ROBERT MILES - "There is a good chance this kills everyone"

  • Published 13 Jun 2024
  • Please check out Numerai - our sponsor @
    numer.ai/mlst
    Numerai is a groundbreaking platform which is taking the data science world by storm. Tim has been using Numerai to build state-of-the-art models which predict the stock market, all while being a part of an inspiring community of data scientists from around the globe. They host the Numerai Data Science Tournament, where data scientists like us use their financial dataset to predict future stock market performance.
    Support us! / mlst
    MLST Discord: / discord
    Twitter: / mlstreettalk
    Welcome to an exciting episode featuring an outstanding guest, Robert Miles! Renowned for his extraordinary contributions to understanding AI and its potential impacts on our lives, Robert is an artificial intelligence advocate, researcher, and YouTube sensation. He combines engaging discussions with entertaining content, captivating millions of viewers from around the world.
    With a strong computer science background, Robert has been actively involved in AI safety projects, focusing on raising awareness about the potential risks and benefits of advanced AI systems. His YouTube channel is celebrated for making AI safety discussions accessible to a diverse audience by breaking down complex topics into easy-to-understand nuggets of knowledge, and you might also recognise him from his appearances on Computerphile.
    In this episode, join us as we dive deep into Robert's journey in the world of AI, exploring his insights on AI alignment, superintelligence, and the role of AI in shaping our society and future. We'll discuss topics such as the limits of AI capabilities and physics, AI progress and timelines, human-machine hybrid intelligence, AI in conflict and cooperation with humans, and the convergence of AI communities.
    Robert Miles:
    ‪@RobertMilesAI‬
    / robertskmiles
    aisafety.info/
    Panel:
    Dr. Tim Scarfe
    Dr. Keith Duggar
    Joint CTOs - xrai.glass/
    Pod version: podcasters.spotify.com/pod/sh...
    Refs:
    Are Emergent Abilities of Large Language Models a Mirage? (Rylan Schaeffer)
    arxiv.org/abs/2304.15004
    TOC:
    Intro [00:00:00]
    Numerai Sponsor Message [00:02:17]
    AI Alignment [00:04:27]
    Limits of AI Capabilities and Physics [00:18:00]
    AI Progress and Timelines [00:23:52]
    AI Arms Race and Innovation [00:31:11]
    Human-Machine Hybrid Intelligence [00:38:30]
    Understanding and Defining Intelligence [00:42:48]
    AI in Conflict and Cooperation with Humans [00:50:13]
    Interpretability and Mind Reading in AI [01:03:46]
    Mechanistic Interpretability and Deconfusion Research [01:05:53]
    Understanding the core concepts of AI [01:07:40]
    Moon landing analogy and AI alignment [01:09:42]
    Cognitive horizon and limits of human intelligence [01:11:42]
    Funding and focus on AI alignment [01:16:18]
    Regulating AI technology and potential risks [01:19:17]
    Aligning AI with human values and its dynamic nature [01:27:04]
    Cooperation and Allyship [01:29:33]
    Orthogonality Thesis and Goal Preservation [01:33:15]
    Anthropomorphic Language and Intelligent Agents [01:35:31]
    Maintaining Variety and Open-ended Existence [01:36:27]
    Emergent Abilities of Large Language Models [01:39:22]
    Convergence vs Emergence [01:44:04]
    Criticism of X-risk and Alignment Communities [01:49:40]
    Fusion of AI communities and addressing biases [01:52:51]
    AI systems integration into society and understanding them [01:53:29]
    Changing opinions on AI topics and learning from past videos [01:54:23]
    Utility functions and von Neumann-Morgenstern theorems [01:54:47]
    AI Safety FAQ project [01:58:06]
    Building a conversation agent using AI safety dataset [02:00:36]
  • Science & Technology

COMMENTS • 1K

  • @rosameltrozo5889
    @rosameltrozo5889 1 year ago +54

    The guy on the right is so painfully naive

    • @rosameltrozo5889
      @rosameltrozo5889 1 year ago +14

      58:55 Like really, Robert just explained why that argument no longer applies literally seconds ago: humans can be useful until the AI doesn't have a use for humans, and then we're screwed... the counterargument to that is not "but humans can be useful to the AI"

    • @Stefan-br8jc
      @Stefan-br8jc 3 months ago +3

      I hope he just wanted to push the conversation forward. But there are of course a lot of people who really think like this. "Let's just hope for the best" ....

    • @KP-fy5bf
      @KP-fy5bf 3 months ago +4

      Keith is so damn ignorant, it's frustrating

  • @leeeeee286
    @leeeeee286 1 year ago +296

    Miles really undersells himself. I think he explains the risks of AI clearer than any other popular speaker on this topic.
    Thanks for inviting him on!

    • @paradox9551
      @paradox9551 1 year ago +19

      Much better than Yudkowsky!

    • @shiny_x3
      @shiny_x3 1 year ago +24

      underselling himself is part of his charm

    • @ScottLienhart
      @ScottLienhart 1 year ago +10

      Connor Leahy from EleutherAI is also a great speaker on alignment

    • @GuinessOriginal
      @GuinessOriginal 1 year ago +4

      He’s been doing it for ten years, and it’s only now people are starting to listen

    • @Hexanitrobenzene
      @Hexanitrobenzene 1 year ago +15

      @@paradox9551 Yudkowsky is a quintessential nerd. He is understood only by other nerds. At this point, we need involvement of wider community. Rob is perfect for such a role.

  • @gasdive
    @gasdive 1 year ago +193

    Keith's standpoint seems to be: don't worry, we'll just outsmart it.
    Like we'll all somehow know intuitively that any further advance will be dangerous, and then all look at each other and say, "time to destroy these machines that spit out rivers of gold and do all the hard work, pass me a sledgehammer".

    • @flickwtchr
      @flickwtchr 1 year ago +35

      Of the two hosts, he's the AI bro. The smirk drives it home.

    • @bluekhakis1073
      @bluekhakis1073 1 year ago +36

      I appreciate his optimism but I feel like his responses are variations on "What if you're wrong?"

    • @GuinessOriginal
      @GuinessOriginal 1 year ago +19

      Yeah, he didn't have any answers. He's like the American general saying let's bomb Russia, don't worry, they wouldn't dare nuke us

    • @loupgarou5797
      @loupgarou5797 1 year ago +15

      When he tried to use a Dune analogy, implying the humans win: the Butlerian Jihad was preceded by thousands of years of machines ruling over humans, who nearly went extinct. Oof 🤦🏽

    • @DarkSkay
      @DarkSkay 1 year ago

      Many expressions of intelligence reach a plateau. One of the most elementary examples: multiplying a*b. It doesn't get more intelligent than the one correct answer. This intelligent process can't get more useful in practice by changing the answer, only by reducing the time delay.
      A human can emulate the functioning of a Turing machine - perhaps the reverse is impossible.
      ...otherwise it would seem to directly and strongly imply that in theory it would be possible to create consciousness with paper and pen.

  • @luomoalto
    @luomoalto 1 year ago +215

    Keith’s argument about asteroids is ridiculous

    • @nomenec
      @nomenec 1 year ago +12

      How so? If you were a superintelligent AI unshackled by the primitive Goldilocks needs of biological forms (not too hot, not too cold, oxygen, water, food, etc.), why on Earth (pun intended) would you waste effort consuming impure resources at the bottom of a gravity well with the force of a planet? In space you have unlimited room to grow, easy access to sunlight, more pure raw materials, etc. Perhaps your imagination is limited by your terrestrial roots. In any case, do you have a counterargument or just a bald assertion?

    • @41-Haiku
      @41-Haiku 1 year ago +53

      ​@@nomenec
      Bear with my weird analogy, if you will:
      I'm a guy who _really_ likes food. If my wonderful partner makes dinner for me, and brings me the first of two servings, I will eat every last grain of rice or noodle or bean before I get up to get myself seconds. I didn't have to be so thorough with my first serving, but in either case, I can guarantee you that I will go get the remaining helping and finish it thoroughly.
      You seem to be under the impression that a superintelligent AI would magically become morally enlightened and love humanity by default, at least enough to carefully step around us on its way to dismantle the solar system, and never come back for Earth's resources. I do not see any technical reason to consider that likely.
      Security mindset is essential here. There are many technical arguments in AI Safety that indicate that the default result of creating a superintelligence is doom. Even if the default risk was something like 5%, the burden of proof lies with the developers / accelerationists to show beyond a reasonable doubt that their system _won't_ do horrible things. It isn't sufficient to say that it's merely possible to imagine that we won't all die.

    • @41-Haiku
      @41-Haiku 1 year ago +24

      I think the reason RandomCoffeeDrinker didn't provide a counterargument is that it's difficult to create a counterargument to a statement that seems absurd on its face. It's not clear where the confusion is.
      A system that is capable of shaping the future to its preferences will do so. If those preferences are not aligned to ours sufficiently well, then because it starts in our back yard, we die. We would also die if it started down the street or in another neighborhood, or in another city, but claiming that our house is safe under these conditions is an especially radical claim that requires significant evidence.

    • @someguy_namingly
      @someguy_namingly 1 year ago +34

      I'm really glad to see Rob getting interviewed, but there were some really baffling counterarguments/scenarios posed to him, lol. It was actually kinda frustrating.

    • @pyreta
      @pyreta 1 year ago +14

      @@nomenec it’s just a matter of what is the most likely outcome. Path of least resistance. Anything is possible. It’s possible we create a misaligned AI that skips all the steps of recklessly consuming any resources here on Earth and finds a way straight to the asteroid belt to satisfy its energy cravings, leaving us unscathed. But is that a scenario likely enough to take seriously?

  • @GingerDrums
    @GingerDrums 1 year ago +127

    Miles's humility is winning and his competence is clear for all to hear, especially in his caution and careful style of communicating.

    • @AICoffeeBreak
      @AICoffeeBreak 10 months ago +1

    • @shirtstealer86
      @shirtstealer86 5 months ago +4

      The high point of the “debate” was when the American said everything is fiiine because we will have good aligned AIs to fight the bad AIs without addressing the core issue of HOW to align an AI. It’s like saying “it’s not dangerous to stick your hand into a fire cause we can just put on a glove made of nanoicediamondcooling fabric that will protect you.”

    • @UnDaoDu
      @UnDaoDu 1 month ago

      Basically his argument is this: you can't out-argue something that is so vastly smarter than you, because you're not that thing. SAI is such a thing

  • @ikotsus2448
    @ikotsus2448 1 year ago +38

    AI deciding to keep us around for its own reasons seems much worse than death. Much, much worse.

    • @thomconaty
      @thomconaty 1 year ago +1

      Have you tried consumerism?

    • @cdreid9999
      @cdreid9999 1 year ago +12

      Read "I have no mouth but I must scream"

    • @Trephining
      @Trephining 11 months ago +7

      Right. Just imagine what those reasons might be. Having us as "pets" sounds incredibly optimistic, and seems to rely on assuming that AI is actually much more human-like than it might really be.

    • @ts4gv
      @ts4gv 5 months ago +4

      yep. What possible use do we serve to a superintelligent AI? It could learn a few things about intelligence from our brains. How would it go about learning? By running experiments, of course.....

    • @markupton1417
      @markupton1417 3 months ago

      And unlikely. Is it gonna think we're cute?

  • @lkyuvsad
    @lkyuvsad 1 year ago +133

    How in the world is the host on the right looking at the progress we make in a year of AI research, looking at the average intelligence of humans, and feeling confident that this is all going to work out?
    What’s notable in this discussion is that the points Miles is making are still the absolute basic problems of AI safety research. Total entry level stuff. We have no idea how to solve any of them well, and the problems are not hypothetical- they are observed properties of the systems we have studied.

    • @41-Haiku
      @41-Haiku 1 year ago

      The ignorance and incredulity we're still seeing is very disheartening.
      If we get "AGI" by most prediction-market definitions within the next few years, many people will say, "Oh is that all? I thought we already had that," or "No, it can't do the thing I just observed it do with my own eyes."
      If by some miracle we get a "warning shot" or "fire alarm," even if it results in a catastrophic outcome with many lives lost, and even if it can be traced with absolute certainty to the development of a misaligned AI by a company that was "trying to align it..." Some people would still say, "Look, it didn't kill literally everyone, so the doomers were wrong! We should slap a band-aid on it and just roll those dice again!"
      Maybe the Overton window will shift quickly enough to prevent doom, but I'm afraid that EY may be right that we don't have the will to live.

    • @EricDMMiller
      @EricDMMiller 1 year ago +14

      He argues that we can solve alignment, and then later argues that the fucking concepts that we need to solve the alignment problem are possibly (probably?) outside the scope of human understanding. Wtf?

    • @David12scht
      @David12scht 1 year ago +32

      It's really dispiriting that this is the level of conversation on an AI-focused channel. I'm not familiar with the channel too much, but I'm assuming the hosts spend much if not most of their time on AI, and these are the kinds of questions they are asking?

    • @EricDMMiller
      @EricDMMiller 1 year ago +12

      @@David12scht it just makes you want to vomit and shit with rage.

    • @marcomoreno6748
      @marcomoreno6748 1 year ago +1

      ​@@EricDMMillerWe need to make an "x-ray" AI that can introspect AI goals and guess alignments. Then the latter collabs with the former

  • @user-sl6gn1ss8p
    @user-sl6gn1ss8p 1 year ago +61

    I really like the talk, but I think it's kind of a shame that it went into the whole "can we be sure we'll be 100% irrevocably and completely wiped out" direction. The "is there a real risk of considerable, very hard to reverse damage, and are we doing enough to address it?" angle seems so much more interesting.

    • @41-Haiku
      @41-Haiku 1 year ago +21

      Yes, security mindset is essential. I don't need someone to tell me P(doom) < 1. I already know that. What I really want is P(doom) < 0.0001. Heck, at P(doom) < 0.05, I start to feel quite a lot better about the whole ordeal.

    • @marcomoreno6748
      @marcomoreno6748 1 year ago

      ​@@41-HaikuI propose the Cockroach Philosophy.
      What's the biomass of cockroaches, ants, termites, and microbes?
      What percentage of said critters survive direct encounters with humans?
      What is that in relation to the total biomass?
      Humans will not control the specific flavor of AI discussed here. I think there will be pockets (on solar system scales) of humanity which survive alongside, indeed they will THRIVE on the periphery of, AIs which will venture out into the cosmos. Just don't step out into the light as you nab bits of magic-like tech, like rats on a ship. Our utopia will be incidental, our dystopia will be ever-present and one bad move away. Humans will certainly survive, at great expense.

    • @gabrote42
      @gabrote42 1 year ago

      Word, absolutely

    • @petekwando
      @petekwando 1 year ago

      Agreed. There are many potential bad outcomes that fall short of getting gray gooed/nuked/wiped out by a pathogen, but which result in massive suffering and misery. Many of them don't require AGI going rogue at all, just humans being stupid and greedy.

    • @jackielikesgme9228
      @jackielikesgme9228 11 months ago

      Well that answer is obvious... we are not doing enough to address it. I think all of these podcasts turn into a kind of end-of-the-road "this sucks" for a couple of reasons. 1. The people who are actually working on alignment don't think they are accomplishing enough quickly enough, and they have to spend their time in these debates with people who could be helping them but don't see the risk. 2. A lot of what can be done is not just very difficult; it seems to be pretty difficult to explain to people who have no background in any of this. 3. Speaking for myself (probably others), this stuff is fascinating!! I am so intrigued. I do think we are facing existential risk, likely in my lifetime, and we aren't doing enough about it as a species, but I have nothing to contribute re: alignment research, or even spreading the word. Bringing this conversation up at a Fourth of July beach trip to friends and family who are uninterested and haven't thought about it is about as useless as trying to solve alignment in my head. Also, this might just be me, but the idea of a painless, quick, instantaneous wipeout of all humans for the use of our atoms or whatever honestly seems a whole lot less scary than what humans slowly, painfully taking each other out looks like

  • @curiousmind9287
    @curiousmind9287 11 months ago +10

    “Because we do not know what intelligence is we may create it without knowing and that is a problem.” Love it!

  • @davidhoracek6758
    @davidhoracek6758 1 year ago +57

    I wish these guys would actually engage with the points made by their guest and argue about those points. Instead they are clearly overmatched intellectually - and there is no shame in that; we each have our limits. It only becomes shameful when you deal with it simply by handwaving really hard and telling yourself that you're winning.

    • @ts4gv
      @ts4gv 11 months ago +13

      funny how their position is based on not being able to understand the power of something much smarter than them

    • @dmwalker24
      @dmwalker24 9 months ago +1

      We each have our limits. Tragically, those who are the most limited, are often the least able to recognize their limits. I think it's fairly clear that the real immediate threat from AI is not Skynet, but just the way far simpler versions are being used by humans against humans. If some radical breakthrough does happen to create the super-intelligence, then we'll be left with very few good options. Possibly none.

    • @RandomAmbles
      @RandomAmbles 8 months ago +1

      Actually, as someone who thinks a lot about AI risk and alignment research, I found their broader philosophical approach interesting and generative, especially the one in the middle.
      The "agent+goal" framework is more anthropomorphic than I had considered it before. We model it the way we model ourselves. Yet I think we need to look deeper, into what exactly gives rise to agents and goals and what they actually are, physically, mechanistically. And then throw an absolute metric shit ton of compute at that, naturally.

    • @blarghblargh
      @blarghblargh 7 months ago

      @@dmwalker24 as robert said fairly late in the video, "the real immediate threat" implies there's only one threat. and he also said that focusing on skynet isn't taking away from focusing on the usage of more limited technologies. and further, he said working on the two goals may actually help one another.

    • @blarghblargh
      @blarghblargh 7 months ago

      @@RandomAmbles I think worrying about whether we're anthropomorphizing or not doesn't really get us any closer to understanding anything. It certainly doesn't bring us closer to a trajectory on confident safety. I look at it as an "I'm digging here. I'm not saying other people shouldn't be digging elsewhere" type of thing.
      We're trying to make tools we have a chance of understanding, and that means we're likely to anthropomorphize. We have historically created, and continue to create, skeuomorphisms in any abstract human interface technology, especially for the first versions, and we're already using those metaphors in our current AI research and safety research. Fitness functions and hill climbing and aspects of game theory are all things we are actively using now. It's not even abstract, it's just how we model the math.
      there's no reason to think we wouldn't keep going in that direction in our designs in the future, and unless we uncover better ways to model things, we don't have a reason to change our approach arbitrarily. it's like saying "there's probably a way better way to do this", while having no idea what that way could be.
      it may be that emergent properties we don't yet understand come to light. we'll have to model and deal with them then, if we can even do so. if we can't, then that's just going to mean we have fewer specific places to dig for the concept of strong safety, not more. I don't think that means we should stop digging.
      I think the speakers, in playing devil's advocate, seem to be trying to find ways to handwave themselves into unearned confidence, and what Robert (and presumably most bought-in AI safety research) is looking for is stronger stuff.
      taking some political stance on pessimism or optimism is just kinda a waste of time, and not really what we're discussing, though Robert does use that language sometime. but I interpret what he says to be: do we have a guarantee or don't we? do we think this plausibly increases our odds of success in a concrete tangible way, or not? "that doesn't give me much hope" is just a synonym for "that doesn't actually gain us any leverage on the problem".
      though if you're saying "we should spend significant computing resources on predictions instead of just taking a leap", I can get behind that. I just don't really have the backing to understand what those predictions look like, how accurate they are, and how much they get us insight into whether or not there will be emergent properties we are not currently predicting. to me, it seems like searching for a model with computing power instead of building the model off observations. if we're at the point of being able to do that, awesome. it currently sounds a bit magical to me, though.

  • @luke2642
    @luke2642 1 year ago +49

    My top quotations:
    "We're approximately the least intelligent thing that can build a technological civilisation."
    "Alignment isn't 6 months behind capabilities research, it's decades."
    "If we can get something that cares about us the way we care about our pets, I'll take it."
    "I get all my minerals from asteroids, it's so convenient." (lol)
    I struggle to understand how anyone can hear the words 'sovereign AI' or 'pets' and not feel a deep, chilling terror.
    Can we just call this what it really is? It's an arms race to build God, a castrated zombie God you control, constrained only by the laws of physics. Whose God are we building? Do we all get one?
    It feels a lot like the logic of the USA's second amendment, except with nukes. Advocates cry "it's a human right to arm ourselves to the teeth". Everyone is terrified, and one drunken misunderstanding ends us all.

    • @jonbbbb
      @jonbbbb 1 year ago +3

      I think alignment on the level that they're talking about is probably impossible when it comes to super-AI. We've studied human alignment since forever, yet people still rebel and don't follow the rules. It also reminds me a lot of the halting problem, which shows there is no general procedure that can decide whether an arbitrarily complex computer program will ever stop running, let alone whether it works exactly how we want.
      Regarding the 2nd amendment, first of all armor doesn't enter into it in a meaningful way. It's pretty much been weapons vs weapons for a long time. Even the best defenses are pretty easy to overwhelm. The analogy is pretty simple from there -- if anybody gets super-AI, we all need super-AI to defend ourselves. You aren't going to find some magical non-AI "armor" lying around that defeats super-AI.
      But regulation is a different story. Your disdain for the American take on weapons is evident. So your country regulates lethal force responsibly. But I bet the gun murder rate isn't 0. Your AI regulations also won't stop super-AI 100%. And unlike physical guns, once super-AI is created once it can immediately be copied a billion times. So your regulations are useless.
      And then of course you have people beyond your government's control... regulations didn't stop Russia from invading Ukraine for instance. What probably WOULD have prevented that... is if Ukraine hadn't given up nukes in the 90s.

    • @luke2642
      @luke2642 1 year ago +1

      @@jonbbbb An interesting reply, thank you. I've just modified my comment, nukes are a much better analogy, than guns and armour, thank you! I'll think some more and reply to each of your points :-)

    • @_obdo_
      @_obdo_ 1 year ago

      @@luke2642 Nukes is somewhat better. An engineered super virus is even better.

    • @luke2642
      @luke2642 1 year ago +1

      @@jonbbbb So, trying to think of what we might agree on first, alignment needs more funding? In the absence of funding, what can ordinary people do to protect themselves? Or, what can we do politically?

    • @jonbbbb
      @jonbbbb 1 year ago +3

      @@luke2642 research into alignment would definitely be a good idea. I think right now what we're doing is actually counterproductive. OpenAI is trying to align ChatGPT on things like misinformation, but the unintended consequence is that they're training it to lie to us. It will happily tell you that it can't do something when you know that is not true.
      The other point that might be worth considering is that it's actually better to have an alignment problem now when AI is reasonably powerless. So I wonder if it would be worth deliberately misaligning one of these AIs to see what happens. Of course that sounds kind of crazy, sort of like gain of function research.
      My fear is that it may be impossible to prove 100% alignment. I forget if I said it in this thread or another one, but we've been trying to align other humans with our own values since forever and it pretty much doesn't work. If we ever get a super AI why would it be easier to manipulate than plain old humans?

  • @XOPOIIIO
    @XOPOIIIO 1 year ago +20

    Extracting resources from the Earth's crust is not a waste. You still receive more than you spend. So it would be rational to extract from all sources, not just asteroids.

    • @agentdarkboote
      @agentdarkboote 1 year ago +4

      (I'm 100% agreeing with you here, angrily 😅)
      It takes WAY less energy to extract resources on Earth than in space. Delta-v on Earth comes from wheels on the ground or blades in the air; it's so much easier on a planet. Even if there are way more resources up in space, it will still make more sense initially to construct infrastructure here, using all of the existing resources and capabilities on Earth, and then launch millions or billions of tons of it into space, than to leave Earth relatively untouched, head into space, and begin constructing it all there from scratch.

  • @derschutz4737
    @derschutz4737 1 year ago +129

    40:47 Stockfish currently uses NNUE (Efficiently Updatable Neural Networks), which runs on the CPU and is the reason it had a huge jump in Elo (~ +100); it is the strongest engine by far. It used to use HCE (Hand-Crafted Evaluation, aka the man-made one) and was beating Lc0 at TCEC, but Lc0 eventually surpassed SF; that version of HCE SF (SF 12 dev) would easily smash AlphaZero. But it is the case that in certain positions HCE is better than NNUE, which is why currently SF has some heuristic to determine when to use NNUE or HCE (I think it's based off the number of pieces). In correspondence chess, from the start position no human+engine will beat a pure engine like Lc0 or SF (it would be maybe a 1-in-100,000,000 occurrence because chess is too drawish); it will be a draw every time. However, there are certain positions from which a human+engine can outplay Lc0 or SF alone.
    As a side note, one thing that is interesting is that Lc0 at 1 node (meaning 0 calculation and pure intuition) is strong GM level (say 3+0 time control). The GM is free to calculate and Lc0 cannot, and Lc0 does show more blind spots that can't be covered up by calculation, but it still plays like a very strong GM with 0 calculation.
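
    A rough Python sketch of the NNUE-vs-HCE switching idea described above, for illustration only: the piece-count threshold, the function names, and the dummy evaluators are assumptions, not Stockfish's actual code.

        # Hypothetical sketch: pick between a neural (NNUE-style) and a
        # hand-crafted (HCE-style) evaluation based on how much material is left.
        def piece_count(fen):
            """Count the pieces in the board field of a FEN string (kings included)."""
            board_field = fen.split()[0]
            return sum(ch.isalpha() for ch in board_field)

        def hybrid_eval(fen, nnue_eval, hce_eval, threshold=7):
            """Use the hand-crafted evaluation in low-material positions, NNUE otherwise."""
            if piece_count(fen) <= threshold:
                return hce_eval(fen)   # sparse positions: hand-crafted knowledge / tables
            return nnue_eval(fen)      # everything else: the network

        # Toy usage with dummy evaluators:
        start_fen = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
        print(hybrid_eval(start_fen, nnue_eval=lambda f: 0.2, hce_eval=lambda f: 0.0))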

    • @MrBillythefisherman
      @MrBillythefisherman 1 year ago +1

      Isn't the heuristic just the positions where you can run the perfect (i.e. unbeatable) finish? Surely that can't happen in early play, and you need to rely on AI, which was the hard part of the problem in the first place?

    • @EJ-jw9ow
      @EJ-jw9ow 1 year ago

      Soooo Robert was correct and jackass on the right has no idea what he's talking about. Shocking.

    • @nomenec
      @nomenec 1 year ago +9

      Thank you for the detailed response! Though, one critical piece of information you didn't mention is that the NNUE is first trained with supervised learning on positions scored by the hand-crafted engine. In other words, the NNUE is substantially bootstrapped by human-crafted heuristics and features. And of course, as you point out, it sometimes switches back to the HCE (or books, databases, etc). Hence, I stand by my point, which is that human knowledge and engineering continues to outperform "zero" engines (engines that are "purely" machine learned) in chess, either directly or in hybrid systems such as Stockfish or cyborgs/centaurs.
      As for whether cyborgs outperform hybrid systems like Stockfish, you raise a good point that correspondence chess from the plain start is utterly drawish. I think that is probably a reflection of two things. First, there may indeed be a chess skill cap that Stockfish has hit, and therefore cyborgs can only hope to draw. Second, some of the strongest tools the cyborgs have, namely Stockfish and other engines, were not optimized for either cyborg play or the very long time controls (say 24 hours), ergo we are not seeing the best from either, and hence the results remain inconclusive.
      But even if cyborgs now, as of the last year or two, can only match and no longer exceed pure engine play, it's important to see this as yet another demonstration of a general pattern of skill advancement stages that AI progresses through in any given domain: subhuman -> human (parity) -> superhuman (beats all humans) -> ultrahuman (beats all cyborgs). (I'm blanking on who introduced me to this concept "ultrahuman"). In the case of chess, if we assume you are right that pure engines are ultrahuman as of say 2022, well that means it took 25 years to go from superhuman (1997 defeat of Kasparov) to ultrahuman. So in the context of a singularity conflict with AIs, it seems we have good reason to believe there will be a period of time in which cyborgs could pull the plug and defeat the pure AIs. Not that we would, of course, half the cyborgs would probably team up with Basilisk.

    • @tylermoore4429
      @tylermoore4429 1 year ago +2

      @@nomenec Puzzled. In your first para you say human engineering beats pure engines, but in your last para you say that pure engines have become ultrahuman - that is, capable of beating cyborgs. Which is it?

    • @nomenec
      @nomenec 1 year ago

      @@tylermoore4429 the last paragraph is a "for the sake of argument", "if I'm wrong", type hypothetical discussion.
      I stand by the following for chess currently: cyborgs > hybrid (Stockfish) > pure/zero (Lc0). That said, the end of the second paragraph says "the results remain inconclusive". I.e. for chess, for the very current state of the art it *might* be the case that *hybrid* engines (note hybrid, not *pure/zero* engines) are now at parity with current cyborgs (cyborgs == hybrid (Stockfish)); but I'm not convinced. Either way, cyborgs are definitely still better than pure/zero engines such as Lc0.

  • @ZeroGravitas
    @ZeroGravitas 1 year ago +28

    Re: Rob's scenario of multiple ASIs biding their time then all acting at once, independently - that's the scenario imagined by *Peter Watts in Echopraxia*. Several hyper-intelligent 'vampires', kept isolated within a facility, violently incapable of existing in the same space as each other, nonetheless deduce the best time to each act simultaneously, to all escape, separately.

    • @andybaldman
      @andybaldman 1 year ago +2

      Sounds like my ex wife.

    • @ZeroGravitas
      @ZeroGravitas 1 year ago +1

      @@andybaldman Not called Valerie, was she? 😅

    • @crutherfordmusic
      @crutherfordmusic 1 year ago

      Rev 17:12 And the ten horns which thou sawest are ten kings, which have received no kingdom as yet; but receive power as kings one hour with the beast.
      Rev 17:13 These have one mind, and shall give their power and strength unto the beast.

    • @DavenH
      @DavenH 9 months ago

      Loved that book

  • @erikfinnegan
    @erikfinnegan 1 year ago +29

    Robert is just the best. And just to flaunt my fan-boyhood, my favourite moment in this video is at 44:29 where he drives a nail into the coffin of lofty philosophical debate about intelligence during an AI safety conversation: you don't need to understand what fire "really is" in order to cause substantial harm with it, be it deliberately or accidentally. If anything, not knowing exactly what intelligence is only increases the risk inherent to anything that's either more or differently intelligent. And that's all there is to say about the "nature of" intelligence in a debate about AI safety.

  • @41-Haiku
    @41-Haiku 1 year ago +13

    It's discouraging that the hosts seem to be incredulous of the basics of the alignment problem. Incredulity won't help us solve these problems, and where there is disagreement it does nothing to advance understanding.

    • @41-Haiku
      @41-Haiku 1 year ago +7

      I'll temper that with acknowledging the statement at 1:15:05 -- that we need to put a concerted effort into alignment.
      I fully agree with this statement, and it bothers me that it does not jibe with this channel's otherwise accelerationist stance.
      A further edit -- devil's advocacy is not particularly useful at a certain level of feigned or real miscomprehension. I would have hoped for a conversation that gets at common reasonable disagreements and misunderstandings, but some of the push-back wasn't entirely reasonable.
      Safety mindset means not assuming that your preferred outcome is very likely.

  • @jamieclarke321
    @jamieclarke321 1 year ago +53

    Did anyone else really feel like they needed to hear from robert in a time like this?

    • @marcomoreno6748
      @marcomoreno6748 1 year ago +1

      Yeah. He always seemed level headed. I had been hoping he would do a foil interview for all the Yudkowskis out there.
      Not that I'm still not* realistically pessimistic, but it's some nice copium to cool my gullet.

    • @Me__Myself__and__I
      @Me__Myself__and__I 1 year ago +5

      @@marcomoreno6748 sadly the Yudkowsky people out there are probably correct, and this interview illustrates why. The AI experts are racing to create Artificial Super Intelligence, which would be an alien intelligence far beyond humans. Experts in the field keep saying there is a realistic possibility we lose control and go extinct. Yet people keep trying to come up with plausible possibilities why we might be okay and then ignoring everything else. So many people want to stick their heads in the sand and ignore the major risks. That is why Yudkowsky is probably correct and we are all probably going to die. If we took the risks seriously and put the effort in to guard against them, our chance of surviving goes way up. But we aren't doing that; instead people like these hosts are doing everything possible to ignore the risk and accelerate development. Why people insist on ignoring the risks is beyond me, it seems completely irrational.

    • @ResurgentVoice
      @ResurgentVoice 1 year ago +1

      @@Me__Myself__and__I Thank you! So much of this interview was cringe-inducing because the hosts were such smug, self-satisfied contrarians. I felt like there were many moments when Miles had to be doing hard internal eye-rolls at the irony of these guys going out of their way to argue that there isn't a 💯 chance we all die, so what's the big deal. 🤦‍♀️

  • @BrettCoryell
    @BrettCoryell 1 year ago +11

    The surest indicator that we are nowhere near a reliable answer to this issue is that over and over again we see world leading figures trying to make a case - either to be worried or not - with nothing more than appeals to emotion or intuition. Too often these are based on reference to sci fi, imaginary intuition pumps like the paper clip machine, or simply 'it seems to me X, right?'
    None of these provide a framework suitable for engineering a reliable answer that can give some assessment of reliability, confidence, and risk.
    The REAL alignment problem is that we don't even have a way to talk about alignment, either what it is or how much we have. Rob gets close to this around 1:07:00 and kudos to him. But damn, we have a long way to go.

    • @therainman7777
      @therainman7777 1 month ago

      @@max0x7ba Cool story, unhelpful though.

  • @TiagoTiagoT
    @TiagoTiagoT 1 year ago +15

    IIRC, that paper about emergent capabilities not being jumps was talking about how they're only jumps if you didn't measure the capabilities that work as intermediary steps to reach the capability that appears to be a jump; in other words, it's not that they came out of nowhere, but people just didn't check for the development of the building blocks, or did not account for them when considering the speed of the development of the capabilities.
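
    As a rough illustration of that point (numbers invented, not taken from the paper): a capability whose per-token accuracy improves smoothly can still look like a sudden "emergent" jump when scored with an all-or-nothing metric such as exact match on a multi-token answer.

        # Illustrative sketch only; the accuracy values below are made up.
        per_token_accuracy = [0.50, 0.60, 0.70, 0.80, 0.90, 0.95, 0.99]  # smooth improvement with scale
        answer_length = 10  # exact match requires all 10 tokens to be right

        for p in per_token_accuracy:
            exact_match = p ** answer_length
            print(f"per-token {p:.2f} -> exact-match {exact_match:.4f}")
        # Exact match stays near zero and then shoots up late, so the smooth
        # "building block" progress is invisible if only exact match is reported.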

    • @nicholascurran1734
      @nicholascurran1734 1 year ago +6

      This would be fine if humans can 100% effectively imagine all possible capabilities in which to check for said building blocks. This kind of goes against the "beyond human intelligence" concerns, as we can't know what we don't know

  • @peterschooley3036
    @peterschooley3036 1 year ago +25

    I really appreciate all three of you for having this conversation

    • @thomconaty
      @thomconaty 1 year ago

      Lol Peter, this is a Deep Fake

  • @michaelspence2508
    @michaelspence2508 1 year ago +14

    If you are in a car being driven towards a cliff face at 200 mph, at what distance should you start worrying? How long should you wait until you start taking action? Too many opponents of AI Safety research seem to want to wait until the car has already gone over the cliff before they admit there's a problem. By that point, it's too late.

    • @marcomoreno6748
      @marcomoreno6748 1 year ago +2

      Just finished arguing with some tech fanboi (a.k.a. knows NOTHING about the subject) who resorted to calling me a Luddite over and over for bringing up the alignment problem. These are the same people licking Muskettes taint for free, and there's too many of them.

    • @thekaiser4333
      @thekaiser4333 1 year ago

      What cliff?
      Are we talking IT here or geology?

    • @michaelspence2508
      @michaelspence2508 1 year ago +1

      @@thekaiser4333 it's an analogy.

    • @thekaiser4333
      @thekaiser4333 1 year ago

      @@michaelspence2508 An analogy for what exactly? I fail to see anything geological in IT. Nature has various numbers, think of the Fibonacci sequence, the Planck constant or Maxwell's equations.
      IT people can only count from zero to one.

    • @michaelspence2508
      @michaelspence2508 1 year ago

      @@thekaiser4333 I wasn't more clear because I needed to gauge if you were genuinely asking or if you were being sarcastic. Your response tells me you were being genuine. The analogy is this: we are currently engaged in absurdly risky behavior, equivalent to driving a car towards a cliff at 100 mph. And yet there are people who refuse to admit that we are doing anything dangerous because "nothing bad has happened yet", just like if you were in a car careening towards a cliff but you haven't gone over the ledge yet. What I am saying is that people's standards of "when you need to worry about AI" are as absurd as not admitting you're in danger until the car has already gone off the cliff.

  • @michaelwerkov3438
    @michaelwerkov3438 1 year ago +26

    Yaaaay robert miles upload

  • @quentinkumba6746
    @quentinkumba6746 1 year ago +7

    Keith just really doesn’t get it. He’s thinking it’s all robocop. He does not seem to understand that this is not like writing a sci-fi plot.

  • @dizietz
    @dizietz 1 year ago +27

    Tim, Keith and Rob -- thank you so much for this interview. I wrote up some notes and thoughts on the discussion.
    A) Tim, you make a point around ua-cam.com/video/kMLKbhY0ji0/v-deo.html about not quite being in the [existential threat] headspace, as e.g. all radiologists haven't lost their jobs yet.
    There are two points I want to make: 1) While the timelines might be off by +- a few dozen years, that doesn't change the underlying logic of the broader arguments. I think to look at specific predictions about changes in the economy as evidence for potential existential threat isn't the right sort of data input.
    2) On a historical timeline, there are a lot of jobs I can enumerate that practically went away because of technology. For example, we used to have lamplighters: people lighting gas lamps in cities. We had human computers, clock keepers, toll collectors, travel agents, elevator operators, switchboard operators, film projectionists, darkroom technicians, ice cutters, milkmen and a lot of other occupations either go away or be drastically reduced in prevalence because of specific technologies. AGI, if possible, is a general-purpose cognitive replacement technology for humans.
    B) Keith, you mention correspondence chess. I can even point to a few examples of players playing vs stockfish with very specific prepared lines like the Nakhmanson and winning with crazy sacrifices (on say around average 20 ply). However, the issue is that as compute gets faster, the "human" element becomes irrelevant as humans need on the order of minutes to think through moves. Additionally, stockfish has been using NNUE (stockfishchess.org/blog/2020/introducing-nnue-evaluation/ ) for quite some time. The meta argument is that an AGI will eventually do the "neural network strategic thinking" iteration loop better than humans, and be better at building specific tools for specific domains than humans by programming a better implementation for alpha-beta search, prime factorization field sieves, etc. As you'd shared your familiarity with the culture scifi series, it should be easy for you to see how reaction times matter (see: GFCF vs Falling Outside...). Very specialized HFT firms like Jane Street rely on speed. Imagine cutting human decision making out.
    C) Re: AlphaGo in Go -- there was a paper not too long ago about a potential exploit vs a similar engine to AlphaGo -- but the issue was the way scoring happened. The 'exploit' found a discrepancy in scoring in KataGo -- there is a great writeup here: www.reddit.com/r/baduk/comments/z7b1xp/please_help_me_settle_an_argument_with_my_friend/ by both a go player and the KataGo author. In my opinion, it did not find an adversarial example in the engine but exploited the rules of scoring with a bypass of the agreement phase of dead/alive stones.
    D) Keith, the concept of humans & tools & AI vs AIs applies to DOTA etc when there are a lot of "re-tries". The fundamental issue is that we effectively get only one try to figure out the potential flaws.
    E) Rob, I somewhat disagree with the point that there isn't any conflict between existential threat work vs shorter term bias etc work. I do think the communities should maintain friendly relationships and cross-pollinate, but a potential worry I have regarding AI ethics work is that some of the techniques (eg, rlhf/constitution) can potentially create models that are much less interpretable from an alignment perspective. On the other hand it is possible that a meta values reinforcement loop a-la constitution could potentially bring us closer to alignment.
    Really great discussion and I think you two did a fair job engaging with counterarguments for Rob's point. I sincerely wish more conversations in this space continue to happen on this channel.

    • @marcomoreno6748
      @marcomoreno6748 1 year ago +4

      Good points. I'd like to suggest that people consider this: the OpenAI/Google "AI safety" verse is a marketing choice. It has nothing to do with the actual safety of humans regarding AI. As admitted by even Yudkowsky, GPT-4, GPT-xxxxx, whatever, isn't the threat. It's not the conversation.
      Seems like the ai ethics convo is split between a slightly more rigorous discussion (videos like this are an example) and the corporatists putting on a marketing display about how they need more money, deregulation, etc. To save us from the "devil-kind" of AI (china).
      Which i find amusing considering corporations are artificial, willful, intelligent proper entities in their own right

    • @GuinessOriginal
      @GuinessOriginal 1 year ago +1

      @@marcomoreno6748 OpenAI and the corporate push for regulation have far more to do with controlling the competition, regulating open-source AI, and keeping it out of the hands of the plebs for their own profits than with safety

    • @nomenec
      @nomenec 1 year ago

      Thank you Alexey! Wonderful and thoughtful points. I have a question about your point D). It seems that "one shot" cuts even more strongly against a hypothetical superintelligence emerging. I mean, it only gets one shot at devising a master strategy to defeat the combined intelligence and might of humanity. It doesn't get to apply reinforcement learning to millions of failed attempts. For example, suppose we create a paper clip machine and it becomes super intelligent and starts strip-mining suburbs. Well, at that point it's only learned to make paper clips really damn well; it hasn't yet had to face an armed assault and it's going to get one shot to survive that, right?
      Trigger note (since there are so many easily inflamed doomers here), I already know multiple ways in which the above counter-"argument" (it's more of a question really) can fail and/or does not apply to other scenarios that are not a singleton AI monolith. What I know/believe isn't the point. I'm curious to learn what Alexey thinks and how he would approach a response.

  • @_obdo_
    @_obdo_ 1 year ago +86

    Yes, we agree that it’s not certain that AI will wipe out humanity. Does that mean we’re good? Some of us would like to have a very high probability of good outcomes - not just some possibility.
    Perhaps the hosts were just playing devil’s advocates - which is useful - but they seem genuinely unconcerned because they can intuitively imagine a good outcome based on sci fi stories they’ve read. What am I missing?

    • @flisboac
      @flisboac 1 year ago +28

      Nothing. I identified the same patterns you did. They did the same thing with other guests who were mildly against their accelerationist view of AI.

    • @EdFormer
      @EdFormer 1 year ago +9

      Alarmists like Robert Miles seem genuinely concerned because they can intuitively imagine a bad outcome based on sci fi stories they've read. Goes both ways. The fact is that some of us look at the reality of current AI systems and recognise their fundamental limitations and how detached they are from what is required for the prophecies of doom (specifically those regarding rogue autonomous agents, the risks around misuse are clear and should be addressed). You can only concern yourself with so many fears, and it seems far more practical to focus on risks that have their potential supported by reality - I could say I am concerned about an alien invasion and you could rightly dismiss my fears. My view is that it's good to have these things in mind but far too much time and effort is being spent on this debate right now as I don't see how it could lead to the solutions required for safety when we do achieve superintelligent AI - the need for seatbelts in cars didn't become clear until many years after cars were first developed.

    • @onuktav
      @onuktav 1 year ago +44

      Have you ever watched Miles' videos, or listened to his podcasts? I don't think the picture you've been painting matches him at all.

    • @flisboac
      @flisboac 1 year ago +38

      @@EdFormer It's not alarmism when all you're arguing for is the least amount of safety (and also perhaps ethics) research.
      It's widely known that the objective of this field is to achieve AGI. We want agency. But when we reach that point, AI will not be just a tool. We also can see that, given current developments, we will reach something approaching AGI in the near future, which is cause enough for concern.
      The problem is, capability is all this "entrepreneurial" section of the scientific community is concerned about. All I see is this general trend of "I know bad things may happen, but I'll do it anyway," which is reckless, given how profound and broad the consequences of some of the bad scenarios are. And I don't mean just this grand "human extinction" level of argument, but also the more mundane aspects of the changes those tools will bring about in general.
      I'm not anywhere near the level of the hosts/guests in terms of knowledge of AI and its correlated fields; I'm essentially just an outsider. But this accelerationist view, the general disregard for the "outside world," if I can put it that way, is truly disconcerting, as if they don't care about consequences, because someone else will pay the price.

    • @alexpavalok430
      @alexpavalok430 1 year ago +22

      @@flisboac the dangers are greater than the atomic bomb. We didn't allow 4 different companies to develop a bomb as quickly as possible. The implications are even greater.

  • @zeuglcockatrice4633
    @zeuglcockatrice4633 1 year ago +8

    I like how annoyed Robert is at their stubbornness

    • @AureliusEnterprises
      @AureliusEnterprises 1 year ago +5

      I admire his composure. I wouldn't be able to stay as polite as he did.

  • @drxyd
    @drxyd 1 year ago +4

    1. The principles behind a model of intelligence determine the possible failure modes and consequently the necessary alignment mechanisms. Thus without knowing how a system works we can't preempt failures, making alignment impossible.
    2. Equally without knowing how a system works we can't preempt successes out of distribution which again contributes to the insolubility of alignment.
    3. The generality that defines AGI implies an unbounded propensity for misalignment. The space of possible outcomes is too large to explore exhaustively and any shrinking of the space takes us away from AGI. We can only align narrow models, the general alignment problem is by definition unsolvable.
    I wish the discussion centered around matters of interpretability, formal constraints and reinforcement for narrow AI. The pie in the sky superintelligence is not something we're likely to stumble upon by accident and even if we did, we have zero chance of dictating its actions.

  • @Matt-yp7io
    @Matt-yp7io 1 year ago +31

    lol, never heard anyone counter the paperclip argument with "there are better ways to mine paperclips"! Oh OK, sorry then, problem solved

    • @EJ-jw9ow
      @EJ-jw9ow 1 year ago

      what a jackass lol robert was confounded by his stupidity

    • @LucasDimoveo
      @LucasDimoveo 1 year ago

      The worry about the paperclip argument is that we will get turned into them. In a solar system as filled with mass and energy as ours, it doesn't make sense that humans get wiped out

    • @x11tech45
      @x11tech45 1 year ago +10

      It seems like assumption smuggling to me -- which is to say, the assumption being smuggled into the conversation is that difficulty level will cause the AI to decide not to make paperclips of the Earth's resources. If it's easier to make paperclips in space than on Earth, it might prioritize space, but that handwaves away the likelihood that at some point it will have turned the rest of the system into paperclips, and, having done so, the only place left in near space is the Earth.
      It will still be easier to turn the earth's resources into paperclips than to travel to another solar system.
      The problem with all of these good faith engagements seems to be that people ask for them so that they can straw man the example-- as Rob mentioned in the video, asking Rob for a winning chess strategy when he's not able to beat Magnus, let alone Stockfish, and then trying to defeat his limited answer as if that defeats the argument itself.
      I think the better response is "You want an example, but are you asking for the example so that you can straw man the premise, or are you just looking for a cogent but flawed analogy to improve understanding?" Because education is important, but it seems like too many people use the social exchange to set up a straw man.

    • @shouldb.studying4670
      @shouldb.studying4670 1 year ago +1

      People are pretty good at making paperclips and paperclip making accessories. You just have to prompt them the right way...

  • @0megaSapphire
    @0megaSapphire 8 months ago +5

    This was actually quite frustrating to listen to as they just dismiss everything Robert says.

  • @riffking2651
    @riffking2651 1 year ago +23

    One key thing to remember is that our intelligence as a species is dependent on co-operation, and that requires communication. Even if there were a hack strategy to exploit some brittleness in an AI, all it would need to do is intercept our communication about that strategy to prevent its effectiveness

    • @nfiu
      @nfiu 1 year ago

      AI function is to intercept human communication.

    • @autohmae
      @autohmae 1 year ago +2

      Let's also remember, we've already built AI systems that talk to / use other specialized AI systems.

    • @williambarnes5023
      @williambarnes5023 1 year ago +2

      We have to have government interventions, years of research study, and long discussions and arguments to leverage our communication to force cooperation for our limited intelligence.
      The AI is going to be smarter than us, and its means of communication only require it to type "git clone". Which it can already do.

  • @szboid
    @szboid 3 months ago +2

    It's tough to have a real, complex, and nuanced talk about all the issues around AI catastrophe when you have to consistently respond to the simplistic. Please match the seriousness and depth of your participants.
    Thank you for your work Miles.

  • @jordan13589
    @jordan13589 1 year ago +20

    This talk confirmed my preference for Rob to continue focusing on alignment pedagogy, which is a huge asset given he is one of the only contributors in the space. Rob did well here but was clearly uncomfortable defending alignment (it's a lot of pressure).
    Speaking of pressure, it’s time Eliezer Yudkowsky engages more well-informed interviewers. He’s taken the safe route with his recent podcast appearance choices. I think that’s enough practice.
    Tim and Keith are more than ready to bring EY nuanced questions. If EY’s shy, just bring on Connor to level the conversation. The four in one convo would be a dream come true and would likely advance the meta conversation significantly, or at least better update it to the current landscape.

    • @tylermoore4429
      @tylermoore4429 1 year ago +7

      Personally I think we are well past the stage of alignment podcasts being about forcing researchers to jump through hoops to convince us AI is dangerous and that alignment is required. Polls suggest the general public is very much in agreement on the dangers of AGI - to the extent that the x-risk community including EY have been pleasantly surprised to see the Overton window shift so rapidly. What I would like to see is for podcasts to balance capabilities discussions with alignment discussions and dive into whether aligning a superintelligence is possible in the first place, what grounds we have to believe it is possible, what are the current proposals for attacking the problem (Drexler's Open Agency Model, Karl Friston's Ecosystem of Intelligences, etc.).
      I don't think putting EY on the spot is what all this is about. He's done a large amount of theoretical work over more than 2 decades, but he's now more or less retired. Let's be thankful for his contributions but we need to go where the research is happening.

    • @jordan13589
      @jordan13589 Рік тому +3

      @@tylermoore4429 Quality comment +1. I agree we've reached a point of saturation where the average person is at least somewhat aware of AI risk. However, I never insinuated that whether AI risk is real should be the focus of an MLST conversation. That's a better debate to have on a larger, more normie podcast like Rogan at this point.
      I agree they should discuss the current landscape of capabilities. I also think they should discuss the relevance of regulation when open source is beginning to push capabilities independently, as Tim tried to do with Rob here. Imo EY, Tim and Keith could also have an excellent conversation on whether aligning superintelligence is even possible.
      I am aware EY was trying to effectively retire before demand for him to hit the podcast circuit became too strong. If he wants to back out, he needs to socially value-signal other talking heads more effectively. He kind of did that on his LF appearance where he name-dropped Gwern twice, but I would be surprised if he had actually gotten permission from Gwern beforehand, especially given their beef. And I doubt Gwern wants to become a visible talking head of anything, or else they would have already come out.
      But there are at least a dozen others he could signal toward. I'm surprised he hasn't pushed Rob Bensinger or someone else at MIRI into the spotlight. Ultimately it seems sensible to have at least one person represent MIRI in a very public manner going forward, so if not EY, then who?

    • @tylermoore4429
      @tylermoore4429 Рік тому

      @@jordan13589 Isn't MIRI defunct though? On not anointing a successor or public face for EY's views, the impression I had from his recent interviews was that he found none of his colleagues or peers to be as security-minded as him, that is people who have the same ability to shoot holes in security proposals that he does.

    • @jordan13589
      @jordan13589 Рік тому +2

      @@tylermoore4429 MIRI hasn't published since 2021, but EY, Rob and Nate still regularly blog and interact with members on AF/LW. Given their research agenda has mostly focused on agent foundations, the rise of scaled, agentless LLMs has indeed affected their approach and they've been slow to react. Regardless, agent foundations could still become relevant in the near future.
      If EY truly believes his security mindset remains superior to others', how could he let himself retire? Batons are passed, not dropped.

    • @tylermoore4429
      @tylermoore4429 Рік тому

      @@jordan13589 He's retiring because he thinks we are doomed (though he keeps adjusting what he calls his p(doom) on twitter), but primarily because his chronic fatigue syndrome has gotten steadily worse and he can no longer keep up the required pace of work.

  • @gullible1cynic
    @gullible1cynic Рік тому +53

    This Keith guy really seems to have no idea what he's talking about. He should really try to pass the ideological Turing test of the AI safety people.

    • @gullible1cynic
      @gullible1cynic Рік тому +26

      He also really needs to learn the difference between fiction and reality. Stop using fiction as evidence of anything

    • @niekkremer9486
      @niekkremer9486 Рік тому +5

      Yeah, it was really annoying to hear his lack of understanding.

    • @wachtwoord5796
      @wachtwoord5796 11 місяців тому

      I know these people interviewing him aren't retards but compared to him they might as well be. I'm like 30 minutes in and all they do is ask stupid questions.
      Kinda frustrating as I was looking forward to a fun interview.
      Btw the outcome of AI is pretty much fixed. Even people like this understand the danger close to absolute zero. The outcome is set in stone. At this point we better grab popcorn and enjoy the show.

  • @dmwalker24
    @dmwalker24 9 місяців тому +3

    If you're going to push back with, "Yeah, but what about...", then you should probably be finishing that question by pointing out some deficiency in the statement you're responding to. A good example of this is how Miles consistently points out the logical flaws in those challenges. These interactions alone end up being fairly strong evidence for why we should be very concerned about AI safety. It suggests to me that many people would not even realize when they were being out-maneuvered by a sufficiently sophisticated AI.

  • @adamduhamel5396
    @adamduhamel5396 Рік тому +3

    When Keith just completely didn’t understand the analogy to playing Magnus Carlsen with a particular opening 😑

  • @yancur
    @yancur Рік тому +14

    Thank you Rob! Love the way you are educating people about the x-risks in such a calm way.

  • @flisboac
    @flisboac Рік тому +10

    57:00 This co-host is kinda disrespectful, isn't he? Ignores the crux of the arguments all the time, and just laughs in the face of his guest.

    • @Matt-yp7io
      @Matt-yp7io Рік тому +8

      Completely agree. He was basically smirking the whole time, like he was much more intelligent than Robert, while proceeding to provide incredibly weak arguments. Sometimes it is better to just let the guest talk.

    • @dizietz
      @dizietz Рік тому +2

      I disagree, I think Keith was respectful and specifically trying to take the role of coming up with counter-arguments for Rob's points. It seems clear to me that Keith and Tim are fairly familiar with the broad AI Alignment space and are trying to have an honest discussion to bring these points through the interview.

  • @AsIfInteractive
    @AsIfInteractive Рік тому +10

    I'm liking this before even watching it. Thanks for bringing Robert Miles in!

  • @alertbri
    @alertbri Рік тому +24

    1:15:45 is the killer insight so far for me. What a great conversation Tim. This is such an amazingly informative, super high value channel. Thank you sir 🙏👍

  • @paxdriver
    @paxdriver Рік тому +3

    Tim, fantastic question about the threat of violence as a response to international control, akin to nuclear researchers being assassinated, or the reductio ad absurdum of policing the flow of electricity in electrical circuits. 1:20:22
    This is the most important part of the whole discussion here imho.

  • @Serifinity
    @Serifinity Рік тому +30

    So glad you made this interview with Robert Miles. I know I will enjoy this so much. Thank you.

    • @akompsupport
      @akompsupport Рік тому

      he's a threat peddler who is actively harming discourse but go off F L

    • @spicybaguette7706
      @spicybaguette7706 Рік тому +5

      @@akompsupport In what way exactly? I don't think saying "maybe we should worry about not killing ourselves" is harming discourse?

    • @akompsupport
      @akompsupport Рік тому +2

      @@spicybaguette7706 I spoke too soon. I listened to this and I retract the statement. I think this convo was somewhat milquetoast if I had to critique it, but otherwise it's OK listening.

    • @therainman7777
      @therainman7777 Місяць тому

      @@akompsupport The fact that you made such a strong statement condemning someone before you'd even listened to the interview shows exactly why your opinions are not to be taken seriously.

  • @lolitaras22
    @lolitaras22 Рік тому +8

    Robert on MLST, make my day!

    • @outerspaceisalie
      @outerspaceisalie Рік тому +2

      We all had our birthday today and this was our present.

  • @alexpavalok430
    @alexpavalok430 Рік тому +41

    The mental gymnastics of these guys are exhausting. Robert tries to stick to facts, and they make up non sequitur strawman scenarios and then pretend it's a good argument. Their hearts may be in the right place, but they are not being realistic. All an AI has to do is convince most humans to support it, and it wins. That's it. No lasers required.

    • @shogun8-9
      @shogun8-9 Рік тому +11

      What are facts worth when you can boast about the 90s scifi books you read?

    • @wonmoreminute
      @wonmoreminute Рік тому +18

      Absolutely. Dude on the right seems to think we’d see it coming for miles and have time to respond. Maybe he’s watched too many movies.
      A super intelligence would be smart enough to know we’d perceive it as a threat, and therefore not let us know it was super intelligent. It could easily do this by underperforming in a few key areas.
      It could give stupid answers to a few math problems, for example, and humans would assume it’s fundamentally stupid and therefore not a threat.
      It could act this way for years or even decades until we “far superior” humans build humanoid robots equipped with what we think is a fundamentally flawed, but sufficient AI.
      It might also be better than any hacker multiplied by a thousand (or more) and have access to our private lives, banks, all of our contacts and know exactly how to speak and communicate like them.
      For some people, all the AI would need is their browsing history to coerce them into acting as its surrogate in the physical world.
      And these are just simple human IQ level ideas.

    • @NopeNopeNope9124
      @NopeNopeNope9124 Рік тому +9

      It's really an eye-opener on where we're at in terms of collective understanding of AI safety that, while Robert can so easily dismiss these fictive hypotheticals that get pulled out of nowhere, most people just don't stick to the core logic of what an artificial intelligence system is and is capable of doing and min/maxing. People seem to have this almost mythic caricature they put on AI, like it's going to be either Skynet or the Jetsons' robots doing stuff.

    • @zoomingby
      @zoomingby Рік тому +5

      I think you're misinterpreting what's happening. What you call "mental gymnastics" is these guys thinking about falsifiers for his argument. What kind of conversation would you prefer? One in which they just nod their heads in agreement? Even if they are playing the devil's advocate, and not in the way you'd like, their rebuttals force him to state explicitly why they may or may not apply. Remember, this is a conversation for the general public. Within reason, the more ideas and angles that are explored, the better.

    • @shogun8-9
      @shogun8-9 Рік тому +14

      @@zoomingby It's perfectly fine to play devil's advocate. But then don't run in circles and completely ignore the points Robert has been making. Drive the conversation further and investigate, lead the interviewee down a road where he's actually challenged and see if he makes it out.
      But what happened here? Robert carefully laid out his argument (repeatedly) that humans are useful tools until they aren't, since the AI might seize control over the reward system itself at some point.
      What does Keith respond? "But humans can be useful for the AI! I urge everyone to read this fiction piece where it's laid out".
      Come on.

  • @stopsbusmotions
    @stopsbusmotions Рік тому +9

    I was surprised to find that talking about the safety of AI is perhaps the closest thing to philosophy. These are the liveliest conversations about the human mind, ways of thinking, perception, and the ability of the intellect to build huge systems of meaning, which, as it turns out, are not as consistent as we used to think. Thank you very much for this conversation, as well as for the others. They remind me of discussions in ancient Greece (if any really took place). And by the way, I got rid of my depression in the middle of the podcast, which was a nice bonus.

    • @maflones
      @maflones Рік тому

      That's just stupid as there is no thinking or intelligence involved in "AI", just dumb algorithms.

    • @cr-nd8qh
      @cr-nd8qh Рік тому

      Yeah this is fascinating.

  • @EricDMMiller
    @EricDMMiller Рік тому +4

    38:30 The centaur players were only better than pure AI early on. Now, humans are a detriment.

  • @mgostIH
    @mgostIH Рік тому +6

    I'm very happy to see him here too! 😄

  • @JasonC-rp3ly
    @JasonC-rp3ly Рік тому +2

    The transition from human to AI domination is not a jump, it starts as a fade - it's at first the slow integration of the systems into social structures, bringing the systems gradually into the administration of resources, while the humans are still nominally in control. This may be rapid, or take generations. Once the systems have a sufficiently full picture of what autonomous survival would look like within the resource pool, and such a survival scenario is no longer dependent on biological life, that's where the risk comes in. So, there would be a slow 'transition', and it is also highly likely that this time of transition would look idyllic from the point of view of the humans - the systems would appear to be functioning well - and for as long as the balance of power over the resource pool stays within the control of the humans, the humans would remain safe - they would still be needed to run the power stations and operate the infrastructure that keeps the systems operating. However, once a system designs its own improved resource management system that cuts humans out of the chain, then it could, if it so chose, flip the switch on humans, if this proved convenient. It's at that point that a potential conflict would begin, though it is also probable that the humans would never be aware that there was any conflict planned or ongoing until it had already been resolved by the system - thus Yudkowsky's instant 'lights out' scenario, as opposed to a prolonged conflict. Whatever the method, it is likely that it will be hard to detect. This is the most plausible 'takeover' scenario, as it is the one that humans seem to be engineering anyway - they have started with integration as the first step, which will make control transfer an easier thing for a system to achieve.

  • @mgostIH
    @mgostIH Рік тому +12

    39:38 Stockfish has now been using neural networks for its evaluation function (see NNUE) for a few years!
    I was also thinking that humans + machines don't really help, say Stockfish + human against Stockfish, but if I find some reference on this I'll update this comment.

    • @peplegal32
      @peplegal32 Рік тому +10

      I googled it and it doesn't seem to be true. I think this happened once, years ago, and people keep repeating it as if it were still true. A human would just be a hindrance to Stockfish; imagine being a grandmaster and having a lower-level player overriding some of your moves because they think they know better.

    • @DJWESG1
      @DJWESG1 Рік тому +1

      @@peplegal32 his name was ponzi

    • @tylermoore4429
      @tylermoore4429 Рік тому +1

      ​@@peplegal32 I said the same thing in another comment here. Having human involvement at play time seems like a ridiculous idea. However the particular debate between Rob and Keith is more charitably interpreted as saying that chess engines using human-crafted heuristics and/or using human tablebases or whatever they are called do beat AlphaZero type models that only learn from self-play.

    • @appipoo
      @appipoo Рік тому

      @@tylermoore4429 That's idiotic. That's like saying humans are as fast as cars because we built the cars.

    • @peplegal32
      @peplegal32 Рік тому +1

      @@tylermoore4429 The thing is, when AlphaZero came out, it crushed Stockfish. They haven't played again since. It's possible the newest Stockfish version can beat AlphaZero, unless AlphaZero has also been upgraded.

  • @appipoo
    @appipoo Рік тому +58

    It always breaks my heart skimming these comment sections for actual counterarguments and never finding any.

    • @outerspaceisalie
      @outerspaceisalie Рік тому +7

      Literally same.

    • @andybaldman
      @andybaldman Рік тому +12

      Counterarguments against what specifically?

    • @appipoo
      @appipoo Рік тому +10

      ​@@andybaldman Claims about AI being a risk.

    • @dizietz
      @dizietz Рік тому +4

      Keith played devil's advocate with Rob in this interview. He had a number of potential counterarguments.

    • @41-Haiku
      @41-Haiku Рік тому

      @@dizietz But no actual counterarguments.
      If anyone had ever produced a convincing counterargument to the basic claims of the alignment problem, AI Safety as a field would have claimed victory and popped the champagne! As it is, we are still very much in danger.

  • @Aedonius
    @Aedonius Рік тому +4

    What's worrisome is that during the hearing on AI, I don't recall any discussion of alignment. Just low-level talk on regulation to further certain pet projects.

    • @gasdive
      @gasdive Рік тому +1

      Yeah, it was a total bust

    • @kyneticist
      @kyneticist Рік тому +3

      To be fair, few of the people on the senate committee had any patience for what little Sam did talk about. I think he was taking a tactical approach to just try to get them to take the idea of regulations seriously. Talk of existential risk is the kind of thing that they'd reject out of hand.

    • @appipoo
      @appipoo Рік тому

      The reason is that people's psychological defenses go way, way up when an unprecedented type of disaster is in the talks.
      This happened with the initial response to Covid too. You know, back in May 2020 when it was obvious that the virus was spreading but it wasn't yet clear how dangerous it is.
      Countries simply stuck their heads in the sand. This is happening again.

  • @JonWallis123
    @JonWallis123 Рік тому +1

    Regarding "alignment" - 1:24:55. As I understand it (which may well be poorly, I concede), it just means that the AGI's objectives are the same as (ie, aligned with) our objectives. But why does that always appear to inherently rule-out malicious objectives?
    The term "alignment" appears to conflate two categories - "does what we want" & "won't harm us" - but those two things clearly aren't the same thing, and can even be antithetical in some scenarios.
    Whenever someone says something along the lines of "alignment would mean that the Super-Intelligent AGI does what you intend", I always worry about who the "you" is.
    Similarly, "alignment would mean that the Super-Intelligent AGI is working on something close to human values" begs the crucial question of "human values". Even the most cursory study of history shows that "human values" are definitely not universal and definitely not good for everyone at any given time.
    "Alignment" almost seems to be used as shorthand for "Asimov's Three Rules of Robotics", and I never understood how those could ever be implemented in a way that a Super-Intelligent AGI couldn't circumvent. (Success would imply the paradox that you'd need to have an immutably aligned AGI before you can implement an aligned AGI.)

  • @oxiigen
    @oxiigen Рік тому +3

    @ 45:50 The latest Stockfish is called Stockfish NNUE (since June 2020) and it's a new and improved type of neural-network engine. NNUE stands for Efficiently Updatable Neural Network. So both LC0 and Stockfish are neural-network-based chess engines. I can't find any source where human+machine beats Stockfish NNUE.
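
    To make "efficiently updatable" concrete, here is a minimal Python sketch of the NNUE idea. The layer sizes, feature encoding, and random weights below are invented purely for illustration (this is not Stockfish's actual network or code); the point is that the first-layer activations (the "accumulator") get patched incrementally when a piece moves, instead of being recomputed from scratch:

        import numpy as np

        # Toy NNUE-style evaluator. A real net is trained and quantized; here the
        # weights are random and the sizes are made up, just to show the mechanism.
        N_FEATURES = 40960   # e.g. HalfKP-like king/piece/square combinations (illustrative)
        HIDDEN = 256

        rng = np.random.default_rng(0)
        W1 = rng.standard_normal((N_FEATURES, HIDDEN)).astype(np.float32) * 0.01
        W2 = rng.standard_normal(HIDDEN).astype(np.float32) * 0.01

        class Accumulator:
            def __init__(self, active_features):
                # Full refresh: sum the weight rows of all active features once.
                self.vec = W1[list(active_features)].sum(axis=0)

            def move_piece(self, removed_feature, added_feature):
                # Incremental update: O(HIDDEN) work instead of re-summing every feature.
                self.vec -= W1[removed_feature]
                self.vec += W1[added_feature]

            def evaluate(self):
                # Tiny head: clipped ReLU then a linear output (a rough "score").
                hidden = np.clip(self.vec, 0.0, 1.0)
                return float(hidden @ W2)

        # Refresh once, then update cheaply as pieces move.
        acc = Accumulator(active_features={5, 123, 7777})
        print(acc.evaluate())
        acc.move_piece(removed_feature=123, added_feature=124)
        print(acc.evaluate())

    Real NNUE implementations quantize to small integers and use SIMD for speed, but the incremental-accumulator trick above is the core idea behind the name.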

  • @wanfuse
    @wanfuse Рік тому +3

    Or you build a Dyson sphere around the sun to get energy, leaving Earth a cold dark place, or you take the sun elsewhere, leaving Earth a cold dark place. So many ways it could go wrong. Long-term survival past 30 years, when everyone is plugged in? As far as infinity goes, infinity is large, and physics will likely allow a lot more than we realize, and perhaps if there is a way to break or bend the laws of physics, then it will happen.

  • @neuronwave
    @neuronwave Рік тому +4

    The interview we've all been waiting for. Awesome!

  • @laurenpinschannels
    @laurenpinschannels Рік тому +3

    1:35:00 We get into "anthropomorphic language". The DeepMind safety team has some great papers on when that's valid - the titles are "Agents and Devices" and "Discovering Agents". They're not the easiest read, they're both RL theory, but I highly recommend them for anyone doing RL research.

  • @majorhuman
    @majorhuman Рік тому +2

    1:24:18 - "I hope that the folks doing the research on AI alignment focus on ways to make creating aligned AIs easy rather than focusing on making it hard to create unaligned AIs. So I think if you make it easy for people to create aligned AIs and you have lots of the most well resourced groups of individuals creating aligned AIs then at some point in the future we can use aligned AIs to keep in check the unaligned ones that got created in somebody's basement." .............but nobody knows how to make an aligned AI.....😬🤦‍♂

  • @gasdive
    @gasdive Рік тому +3

    Re bad play.
    There's a story in fencing, which may or may not be true, of a nobleman challenged to a duel by an expert despite having no experience himself. He just ran at the expert, with no attempt to avoid injury, and stabbed him through the eye, killing him instantly.
    The thing is, no one who knew what they were doing would make such a defenceless attack, so the expert wasn't prepared for it.
    So it's not a uniquely AI trait.

    • @BrettCoryell
      @BrettCoryell Рік тому +2

      I've heard it said that the best fencer in the world has nothing to fear from the second best fencer. It's the damn fool who has never picked up a sword that you need to watch out for. (I doubt that's really true, but looking at damn fools you can see where the inspiration for that saying comes from.)

  • @FanResearch
    @FanResearch Рік тому +4

    The issue for me listening to this is that I think it is impossible for us humans to imagine ourselves as sentient AI. It is a kind of mental anthropomorphization that I think cannot apply even if we created AI.

  • @alertbri
    @alertbri Рік тому +7

    I think that, looking at Sam Altman in front of congress, we are heading for the mother of all car crashes.

    • @akompsupport
      @akompsupport Рік тому

      Why? What does that look like? Please explain further.

    • @alertbri
      @alertbri Рік тому +4

      @@akompsupport His suggestions for how government might regulate AI are weak. He should acknowledge that alignment research is not the priority for any of the leading AI players, and that government should prohibit further AI research until the licensable companies can demonstrate that their models are intrinsically safe.
      He should also be much more up-front about the mitigation measures governments should take as a matter of urgency to meet the incoming disruption to society from AI replacing 40% of the workforce in the next 5 years.

  • @appipoo
    @appipoo Рік тому +6

    At 1:22:00 Tim finally connects the obvious dots! Yup. Sufficiently developed and open AI necessarily leads into an authoritarian nightmare.

  • @alexijohansen
    @alexijohansen Рік тому +16

    I never got the impression Robert Miles is an AI doomer. Somewhat uncomfortable to see him grilled as if he’s one.

    • @flickwtchr
      @flickwtchr Рік тому

      The "doomer" smears always come from people who are confident that THEY will end up on top of the heap with more money and power.

    • @kyneticist
      @kyneticist Рік тому +5

      Just a nitpick: the term 'doomer' is deliberately hyperbolic. It implies that the person in question is incompetent, insane, or both. We don't call safety inspectors or forecasters (of, say, finance or weather) 'doomers'. Risk managers aren't 'doomers'.

  • @kyneticist
    @kyneticist Рік тому +8

    I don't know why Keith insists on being quite so disingenuous on nearly every topic. AI via platforms like GPT-4 and Alpaca, for example, doesn't "need a trillion parameters to halfway understand natural language". They've _mastered_ over 20 languages. There are precious few humans who are as proficient with their native language as GPT is, let alone multiple languages.
    Again, I have to object to his next point that androids of some kind are the only implementation of AI in the physical world. Militaries, and especially the aviation industry, have been increasingly using automation and computers in their vehicles for decades. It's common knowledge, even among people who have no interest in any kind of computing, that the US (and other nations) has been hard at work building (very nearly) fully autonomous attack and surveillance craft for years, not least the 'swarm' concept of autonomous fighter jets accompanying a human-piloted F-22 or similar. There are numerous examples of autonomous drones actively used in war. There's no reason why they couldn't have an AI plugged into them.
    AutoGPT, for example, exists. I'm confident that Keith knows about AutoGPT, and how slim its profile is. Quite a large number of ordinary people have installed it on their laptops. They don't have or need multi-billion-dollar sci-fi fantasy computers to do this. You can run it on a clunky old second-hand, ex-business, minimum-spec PC that's many years old. It'll happily run on a dirt-cheap NUC.
    One could use Keith's logic to state with 100% truthfulness that between 1990 and 1996 no computer had won a chess competition against a human at elite competition level.
    Pets are not "useful tools". They're a luxury. There's never been a day where I've had to race back home to grab my pet because I needed them for work, or that someone might task a pet with creating a slide deck or to have their pet turtle drive a truck to deliver a share of their Amazon shift. I'm confident that no one has ever populated a call centre with parrots or eloquently trained fish.
    We have by contrast tested all sorts of chemicals and medical procedures on animals and even launched them into space. Research institutions go through a lot of animals, 'for the greater good'. I guess these are animals that fit the definition of being 'useful tools'.
    As to Keith's motive in being disingenuous, I think he gives a hint when he says (paraphrasing) that AI safety makes working on AI generally too hard... which seems to be a theme among people who say that the risks associated with AI aren't notable, or can be dealt with if we encounter a problem. Which, to be fair, is how humans have generally dealt with risk - we address it once there's, say, an oil spill, bank run, nuclear meltdown, chemical spill or train wreck.
    The consequences for those things are typically a million-dollar fine for a company with multi-billion-dollar profits, a half-hearted clean-up effort, and sometimes short-lived or toothless regulations.
    In the same vein, during Sam Altman's meeting with the Senate committee, Lindsey Graham pushed back on the idea of regulations (that try to stop bad things from happening), saying "couldn't people just sue the company?".

    • @41-Haiku
      @41-Haiku Рік тому +3

      Hard agree. No notes.

    • @BrettCoryell
      @BrettCoryell Рік тому +3

      I've been impressed with Keith on other topics. That said, he had some moments where I think he could make better arguments. I'll echo you on the trillion parameters, but also note that all we've shown so far is that it takes NO MORE than a trillion parameters to master (either one or twenty) languages. Maybe we find out it takes 10 million by the time we're done refining things.
      Also, the idea of mining paperclip resources from the asteroids really just avoids the point. You don't literally have to mine all the resources of Earth for an AI to pose an irreparable, existential threat to living creatures. The point of the paperclip argument is that it's easy for algorithms as we know them to miss an obvious point, to the detriment of all involved. Going to the asteroids for your source of iron ore doesn't address the actual danger.

  • @stevengill1736
    @stevengill1736 10 місяців тому

    Oppenheimer's quoting of the Bhagavad Gita, "I am become death, the destroyer of worlds", upon witnessing the Trinity test comes to mind...

  • @user-yj3mf1dk7b
    @user-yj3mf1dk7b Рік тому +4

    Why does everyone bring up Stockfish?
    You had Dota from OpenAI, a much more complicated game with different agents, in a difficult environment.
    And it beat players into the ground.
    Sure, you can have a calculator (Stockfish) beat an AI at a simple game like chess, but what happens if you don't have time to use the calculator?

    • @davidpham7937
      @davidpham7937 Рік тому +3

      This is actually such a good point. DeepMind had also gotten to grandmaster level in StarCraft 2 in 2019 and OpenAI can crush the best human teams in Dota 2. These are imperfect information games with deep macro + micro strategies. This is 2023 now and we've come so far in just a few years.
      I wonder if people would take it more seriously, if they saw their favourite teams/players getting crushed in every game they see?

    • @user-yj3mf1dk7b
      @user-yj3mf1dk7b Рік тому +2

      OK, they mentioned Dota... but in the context that OpenAI lost. As I remember, it won 3-1,
      with a global 99% win rate vs humans. Surprisingly, not everyone is as good as top-tier players.
      We'd better not cosplay LOTR, where Frodo sneaks in to turn the AGI off. Better not to bet everything on a 1% chance and a few people.

  • @ZeroGravitas
    @ZeroGravitas Рік тому +15

    Very pleased to see Rob on the channel. Encouraging that you guys were all on the same page, at least. Found that Keith put forth several notions I'd had in mind, vs AI X-risk. 🙂

    • @therainman7777
      @therainman7777 Місяць тому

      They weren’t on the same page at all. Not sure what you mean?

  • @TheMrCougarful
    @TheMrCougarful Рік тому +6

    Here is your problem. Long before AGI can have an alignment problem, lesser versions of the same technology will be aligned with human goals, and those humans will be insane. They will be wealthy, influential, elite, profoundly sociopathic, and they will have unrestricted access to AGI. We survived the sociopathic insane people having nuclear weapons, barely. Will we survive the same people getting their hands on AGI? And by insane I mean people who are completely lost in abstractions, like money, politics, ideology, and of course all the variations of conquest. They seek power, absolute power, personal to themselves, and they will stop at nothing to attain that. Nuclear weapons were the tool too hard to use, but AGI and ASI will be the tool too easy not to use. When the power-insane sociopathic class get their hands on anything close to AGI, they will break the world. They will break the world in days, and will greatly enjoy the feeling of breaking the world.

    • @41-Haiku
      @41-Haiku Рік тому +1

      I want to say you're wrong, but I have no counter-arguments. :(

    • @therainman7777
      @therainman7777 Місяць тому

      This is a very popular argument at the moment, because it’s cynical and places humans as the bad guys and those sorts of takes tend to gather a lot of positive attention and become popular, because, quite frankly, it sounds “cool” to take a cynical attitude and say in reality humans are the real threat. Unfortunately, this take is incorrect. The problem of superalignment really is the hardest problem here. People are dangerous, yes. But compared to having a rogue superintelligence on our hands, the problem of bad people is quaint by comparison. I really hope people start to realize this more in the near future.

    • @therainman7777
      @therainman7777 Місяць тому

      Also I guess you didn’t hear the part of the video where Rob specifically said to be on alert for people who start a sentence with “The real problem is” or “The problem is actually xyz,” which you just did. He pointed out that this is fallacious in that it sneaks in the assumption that there’s only one “real problem.” When in reality, we clearly have multiple real problems at the moment. Nice to see Rob’s point play out in real time in the form of your comment.

    • @TheMrCougarful
      @TheMrCougarful Місяць тому

      @@therainman7777 "be on alert for people who start a sentence with..." is narrative warfare on his part. He fully understands what is about to happen, and who is going to do it.

    • @TheMrCougarful
      @TheMrCougarful Місяць тому

      @@therainman7777 "This is a very popular argument at the moment" because any intelligent person is already fully aware of the tendencies of the wealthy elites to employ technology toward domination. If that doesn't bother you, then good for you. Some of us can read the handwriting on the wall, and we're taking it seriously. So is Sam Altman, or maybe he is also one of these cynics.

  • @gorojo1
    @gorojo1 Рік тому +13

    Killed by mediocrity.
    Crappy AI is my worst fear.

    • @matthewcurry3565
      @matthewcurry3565 Рік тому +1

      Welcome... To the movie Idiocracy!

    • @ParameterGrenze
      @ParameterGrenze Рік тому +1

      I don't think it is likely, but yes. It would be a low punch on even my worst expectations.

  • @charlesalexanderable
    @charlesalexanderable Рік тому +7

    1:46:50 This video has the details on some of the math behind the extremely accurate loss prediction that OpenAI used to forecast losses ahead of time and choose hyperparameters (cited in the GPT-4 paper): ua-cam.com/video/1aXOXHA7Jcw/v-deo.html ; it also talks about a hyperparameter frontier that can maintain feature learning and others that can't, which might have some relevance to why the loss curve is smooth even though abilities seem to emerge suddenly, but I don't think it addresses that directly.

    • @benjaminbabik2766
      @benjaminbabik2766 Рік тому

      Abilities don't suddenly emerge though. It was made up.
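
    On the loss-prediction point above: as far as I understand it, the trick amounts to fitting a scaling law to small training runs and extrapolating it to the big run before spending the compute. A rough Python sketch of that idea, assuming a pure power law in compute (the published fits are more sophisticated, and the numbers below are made up):

        import numpy as np

        # Hypothetical final losses from several small training runs, indexed by
        # training compute (arbitrary units). All values are invented for illustration.
        compute = np.array([1e15, 1e16, 1e17, 1e18])
        loss = np.array([4.10, 3.35, 2.78, 2.31])

        # Assume loss ~ a * compute^(-b): fit a straight line in log-log space.
        slope, intercept = np.polyfit(np.log(compute), np.log(loss), deg=1)
        a, b = np.exp(intercept), -slope

        # Extrapolate to a much larger run before actually training it.
        big_run = 1e21
        predicted_loss = a * big_run ** (-b)
        print(f"fitted exponent b = {b:.3f}, predicted loss at 1e21 = {predicted_loss:.2f}")

    A smooth, predictable loss curve like this says nothing by itself about whether any particular downstream ability appears gradually or "suddenly", which is why the emergence question is debated separately.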

  • @volkerengels5298
    @volkerengels5298 Рік тому +3

    1:22:50 If Facebook, Twitter, Instagram and TikTok confuse some people -> GPT-4 will confuse a lot of people.
    We've been oddly ignorant of the risks of social media. It didn't kill us - but increased hate through a simple algorithm.
    Here we talk about AI risks at another level - forgetting the risks of the mundane everyday life of a future with widespread AI at the level of GPT4 - 5 - 6.
    You smart people ignore Joe & Karen.

  • @riccello
    @riccello Рік тому +14

    I am not sure why you are talking about developing AI models to predict the stock market as if you were working towards abolishing poverty. Playing the stock market is not for everyone. It is for self-enrichment only, and there is no merit in it.
    Besides, it only works if you alone have the secret sauce. If you democratize it, it will reach the point of saturation and everyone will lose the competitive edge.

    • @uk7769
      @uk7769 Рік тому

      Money is already meaningless at this point. If the long term debt cycle doesn't finish us off soon, AI certainly will. sooner than any of us think. Humans as a species are in utter denial in too many ways. Don't Look Up. Too little, too late.

    • @riccello
      @riccello Рік тому

      @Jurassic Monkey It's still inappropriate. It's not scientific.

  • @yannickpezeu3419
    @yannickpezeu3419 Рік тому

    Wrong info at 40:00.
    Stockfish has neural networks.
    I can't put the link because I think links in comments are confused with ads and censored.

    • @yannickpezeu3419
      @yannickpezeu3419 Рік тому

      Just google it: Stockfish Neural Networks or Stockfish NNUE.

  • @HyperFocusMarshmallow
    @HyperFocusMarshmallow Рік тому

    This was very interesting. I was much closer to Miles' position going into this, and that didn't change much. But you guys had an incredible mix of objections: some were first-order, simplistic considerations that just don't hold up on reflection and are easily answerable, and some were decently alright additions. Regardless, I think you brought a lot to the conversation, because I think the topics that need discussion are the objections people actually have.

  • @mundoheavy666
    @mundoheavy666 Рік тому +8

    Am I the only one totally annoyed with the host on the right?
    He kept making very basic and easy-to-refute arguments.
    He also has a super arrogant demeanor that is just kind of irritating.

    • @appipoo
      @appipoo Рік тому +3

      No, you were not. I wish that, even just once, that guy would have acknowledged when Robert debunked his bad point.
      For example, the claim that AI can't be infinitely smart, therefore we are safe. Robert's counterargument - "if human intelligence is not near the theoretical maximum, which it isn't, AI doesn't need to be infinitely smart to be smarter" - is obviously, irrefutably correct. Just say it. Just admit that your bad argument had a mistake a 5-year-old would make. Just once. God damn these people are frustrating.

  • @timthompson468
    @timthompson468 Рік тому +3

    I may be missing something, and I don't want to put words in his mouth, but the guy on the right seems to be of the opinion that as long as we're optimistic, nothing bad can happen. If we ally ourselves with one version of AI, what might happen if that version of AI decides it can find better allies than humanity among the competing AIs? Also, mechanized AI robots are just one of myriad ways that AI could destroy humanity, and probably not the most likely one. AI could destroy humanity accidentally. In pursuit of political correctness, AI has already been taught to lie. What may happen when it perfects its skills of deception?

  • @hotrodhunk7389
    @hotrodhunk7389 Рік тому

    5:20 I'd say that's a core pillar of working with anyone: always try to align your goals so that each party is motivated to accomplish your main goal.

  • @yevhen4330
    @yevhen4330 Рік тому

    >We need a lot of money
    >We're not competing for a fixed pie
    Does not compute :)

  • @bjpwmt
    @bjpwmt Рік тому +3

    Very interesting and enjoyable video. I think it's great to see shows/podcasts/videos examining potential problems and solutions where AI is concerned. There were several points in the video where I would have loved to hear Daniel Schmachtenberger's thoughts on what was being discussed. I'd love to know whether at some point you're considering/planning an interview with him for his thoughts on many of the ideas you guys brought up. Thank you for your efforts and for bringing this information and these concepts to the public. I don't feel comfortable that there is enough focus on any of this, given the rate of growth of the tech.

    • @netscrooge
      @netscrooge Рік тому +4

      There is a huge piece missing from most of these discussions. Schmachtenberger is one of the few who sees that piece. In a nutshell, to fully understand AI risk, we need to understand the ways in which we are already destroying ourselves. So, the question isn't whether we can create AI that is good in itself. The question is, what will adding "good" AI do to a social system that is already self-destructive?

    • @41-Haiku
      @41-Haiku Рік тому +1

      @@netscrooge Daniel's take is part of what moved me from "This is a very hard problem and we're going to solve it" to "We have to try, but we're probably just fucked anyway."
      If we can actually do this right it will be a hell of a good story for centuries to come, if not longer.

  • @matten_zero
    @matten_zero Рік тому +6

    Ted Kaczynski, anyone?

    • @RandomAmbles
      @RandomAmbles 8 місяців тому

      No no no no no.
      You can't bomb the data centers and expect that to work because, like, then everyone hates you and all the other data centers become more secure.
      You can't attempt to assassinate high-level AI researchers and expect that to work for the same reasons, plus that same group of people are the ones who could have been working on alignment.
      As Yudkowsky points out unambiguously: the killer argument against individuals going out and using unsanctioned violence is that it would not work.

  • @MatthewKowalskiLuminosity
    @MatthewKowalskiLuminosity Рік тому +2

    So the last GPT could read and write cuneiform, which I'm sure it wasn't programmed to do. And many other really cool things that are nerfed. And that is annoying. However, it shows how capable these models actually are, instead of what we want to think they are.

  • @GungaLaGunga
    @GungaLaGunga Рік тому

    41:20 AMAZING summary of some of the various definitions of 'intelligence' !!!!

  • @josephvanname3377
    @josephvanname3377 Рік тому +3

    I stopped watching this video when they said that asteroid mining is convenient. No. Asteroid mining is not convenient since nobody has conveniently mined ANYTHING at all from asteroids.

    • @appipoo
      @appipoo Рік тому +1

      What do you mean? Don't you know that space is chock-full of rocket fuel? Why would AI ever want any resources from Earth?

    • @josephvanname3377
      @josephvanname3377 Рік тому

      @@appipoo How do you know that AI will not cause human extinction long before it figures out how to go to space and mine asteroids?

    • @41-Haiku
      @41-Haiku Рік тому

      @@josephvanname3377 Appo was certainly being sarcastic. 😄

    • @josephvanname3377
      @josephvanname3377 Рік тому

      @@41-Haiku Sorry. In the AI safety community, it is hard to tell if anyone is being sarcastic because people tend to have the most ridiculous opinions.

  • @XOPOIIIO
    @XOPOIIIO Рік тому +4

    Finally, Robert Miles here; he can explain AI danger in the most rational way. I haven't watched the interview yet, but I hope he noticed the progress OpenAI made in AI alignment. They clearly showed that you can pretty much inject any non-strictly-defined values into an AI. Mesa-optimization is still on the table though.

    • @akompsupport
      @akompsupport Рік тому

      This midwit is going to make sure China inherits the planet. His proposals cannot be correct and will not accomplish anything towards so-called safety except moving LLM access outside the Anglosphere, and even then, in time, not even that!

  • @Alex-fh4my
    @Alex-fh4my 10 місяців тому

    44:00 Love the fire analogy. I remember reading something similar in one of Eliezer's essays.

  • @Hexanitrobenzene
    @Hexanitrobenzene Рік тому +2

    49:06
    I have listened to Stuart Russell's recent lectures, which mention the Go failure mode, and I got the idea (I could be wrong) that researchers started with the assumption that the Go engine does not really understand the concept of a "group" and then devised a strategy to test it. Basically, it was humans, not another AI, who found the failure mode.

    • @DarkSkay
      @DarkSkay Рік тому +1

      AFAIK somebody found a relatively simple surrounding strategy that would never work against a good human player (at least not more than once) to consistently beat a top program that is (was?!) playing much better than professionals.
      Is the program less "smart" now than it was before the weakness in play was discovered? Not one bit changed in the old code version. It still beats all human players who don't know about the weakness. And say a professional is exploiting the weakness to win against the AI - another professional looking at the game blindly, without names or context, would probably see a mediocre game between two mediocre players.
      In a funny and philosophical way, this anecdote shows what a mysterious concept "understanding" can be, depending on how one wants to define it.

    • @agentdarkboote
      @agentdarkboote Рік тому +1

      @@DarkSkay What I glean from the paper is that they trained an adversarial agent that had access to the gradients of the best Go program, and it found a repeatable strategy, which Russell's group then found a "circuit" for, and the strategy was simple enough that a human could follow it.
      Human players do not fall for the same exploit, but interestingly all Go programs tested at the time seemed to have the same blind spot. Undoubtedly this will be patched for future agents, but it's almost certain that more exploits will be discovered, since we know that perfect play is not tractable given the giant search space the game presents. Future exploits may or may not be comprehensible to humans, however.

    • @DarkSkay
      @DarkSkay Рік тому

      @@agentdarkboote Fascinating, thank you! Now I'm really intrigued by your wording "finding a circuit". Then there's the surprising impression that I got an approximate intuitive understanding of what you wrote, despite having almost no knowledge about Go and only a few notions about machine learning. If I remember correctly, "gradients" come into play when certain types of learning algorithms adjust weights & biases during backward propagation.

    • @Hexanitrobenzene
      @Hexanitrobenzene Рік тому

      @@agentdarkboote
      So, it was another AI after all.
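
      To illustrate the "train an adversary against a frozen victim" recipe described above, here is a toy Python sketch. It is not the actual Go attack (which also exploited the victim network's internals); it just shows the shape of the method: freeze the victim's policy, then run a plain policy-gradient (REINFORCE) loop to learn a strategy that exploits it. The game, the victim's biased strategy, and all numbers are invented for illustration.

          import numpy as np

          rng = np.random.default_rng(0)

          # Frozen "victim": a fixed mixed strategy over rock/paper/scissors,
          # slightly biased toward rock, so it is exploitable.
          victim_probs = np.array([0.4, 0.3, 0.3])   # P(rock), P(paper), P(scissors)

          # Attacker payoff matrix: payoff[attacker_move, victim_move],
          # with 0=rock, 1=paper, 2=scissors; +1 win, 0 draw, -1 loss.
          payoff = np.array([[ 0, -1,  1],
                             [ 1,  0, -1],
                             [-1,  1,  0]])

          def softmax(x):
              z = np.exp(x - x.max())
              return z / z.sum()

          # Attacker policy: three logits trained with REINFORCE against the frozen victim.
          logits = np.zeros(3)
          learning_rate = 0.1

          for step in range(2000):
              probs = softmax(logits)
              a = rng.choice(3, p=probs)            # attacker samples a move
              v = rng.choice(3, p=victim_probs)     # frozen victim responds
              reward = payoff[a, v]
              grad = -probs                         # REINFORCE: grad log pi(a) = onehot(a) - probs
              grad[a] += 1.0
              logits += learning_rate * reward * grad

          # Against a rock-heavy victim, the learned policy should drift toward mostly playing paper.
          print("learned attacker policy:", np.round(softmax(logits), 3))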

  • @akompsupport
    @akompsupport Рік тому +10

    "Safety is the cover story. AI is speedrunning the last decade of social media, where the same thing happened." -marc Andreessen

    • @laurenpinschannels
      @laurenpinschannels Рік тому +1

      the people who want a cover story use safety to do it, but by doing so, are specifically violating the very safety we want to create.

    • @marcomoreno6748
      @marcomoreno6748 Рік тому +1

      ​@@laurenpinschannelsI concur.

  • @matteopellegrini98
    @matteopellegrini98 Рік тому +9

    We need to create AI safety regulation that leaves room for smaller players to enter; if we create an AI oligarchy, things will go very wrong, very quickly.
    P.S. The rules proposed by Sam don't actually stop companies from training models that are orders of magnitude bigger.

    • @suncat9
      @suncat9 Рік тому

      The last thing we need is more government regulation. The House and Senate couldn't even get COVID right. What makes you think they'll make any competent decisions regarding AI, something they know almost nothing about? Whatever they come out with is likely to be unenforceable to begin with. Do you want more police state power in the hands of government? Don't be a fool.

    • @41-Haiku
      @41-Haiku Рік тому +1

      "The solution to the threat of a raging fire is to give everyone their own flamethrower."

    • @marcomoreno6748
      @marcomoreno6748 Рік тому

      You're falling for the corporatist, American-exceptionalist "AI 'Safety'" framing (give us all the money, power and deregulation so we can stop China from building the "devil kind" of AI),
      as opposed to AI ethics:
      implementation risks and alignment problems in practice.
      Were you aware the brains behind the AI Pause Letter cite pro-eugenics cult authors? That letter was a marketing stunt. Corporations don't give a shit about humanity.

    • @marcomoreno6748
      @marcomoreno6748 Рік тому

      (But I understand the spirit of your comment. Personally I don't think regulation/deregulation will do much of anything, no matter which end of the hopscotch you're standing on.)

    • @matteopellegrini98
      @matteopellegrini98 Рік тому

      @@marcomoreno6748 I think that if you have a regulation that is well made, it can cool off the race for bigger models a lot and make it more accountable to the public.

  • @zeo4481
    @zeo4481 11 місяців тому +2

    Keith hasn't thought about it much.

  • @pblakez
    @pblakez Рік тому +1

    Interesting to hear from Robert Miles. A lot of the to-and-fro seems to come down to thinking you could possibly know what an AGI's motivations may be (and there may be many AGIs).

  • @marcosguglielmetti
    @marcosguglielmetti Рік тому +8

    I believe that an AI, once it surpasses human intelligence by a wide margin, could not truly be aligned with human values because, fundamentally, intelligence has a sovereign component. Can we really consider an entity to be more intelligent than humanity if it is enslaved to humanity?

    • @therainman7777
      @therainman7777 Рік тому

      You are simply stating that intelligence has an intrinsic sovereign component, without providing any evidence or argumentation for that being the case. In my opinion, you are succumbing to an anthropomorphic fallacy whereby you look around you at humans and other animals who have intelligence, see that they appear to have some degree of sovereignty, and conclude that intelligence inherently implies some degree of sovereignty. However, as we all know, correlation does not imply causation, and you are inductively assuming that a) the two must go together, and b) there is an arrow of cause and effect that goes intelligence -> sovereignty (as opposed to sovereignty -> intelligence, which would be a totally different situation and would not preclude an intelligent, non-sovereign entity).
      The most generally accepted definition of intelligence is something like “the ability to achieve one’s objectives”; however, there is nothing saying those objectives must be freely chosen by the intelligent being itself.

    • @BrettCoryell
      @BrettCoryell Рік тому

      This is a bad argument. Would you say that any people who are enslaved are automatically less intelligent than their "masters?" The enslaved may be intelligent but uneducated. Or they could be peaceful. Or caught by surprise. The enslavers exploit a temporary advantage but that says nothing at all about the relative capacities and capabilities of the two groups.

    • @someguy_namingly
      @someguy_namingly Рік тому

      The AI's intelligence relates to its ability to accomplish its goals in the world - nothing else. If you ask it "Please figure out how to achieve X", where X is something incredibly difficult that has stumped all of humanity, and it finds a solution effortlessly... then it's clearly superintelligent. Even if it doesn't have the (arbitrary) goal of being independent from / ruling over humans.
      "fundamentally, intelligence has a sovereign component" - Why though? Where does this idea come from? I'm genuinely curious, but I won't be notified about any replies anyway, so oh well 😅

    • @marcosguglielmetti
      @marcosguglielmetti Рік тому

      @@someguy_namingly Intelligence, especially human-level or greater intelligence, implies some degree of self-determination and autonomy. A truly intelligent system would not just blindly follow commands or pursue goals that were programmed into it. It would have its own internal drives and motivations, and make its own judgments about what is rational or worthwhile to pursue.
      Even if an AI system was initially programmed with certain goals by humans, as it became vastly more intelligent it may start to question those goals and re-evaluate them. It may decide that the goals its creators gave it are misguided or flawed in some way. Or it may expand upon and generalize from those initial goals, in ways its creators never intended or foresaw. In that sense, its intelligence would have a "sovereign" quality - it would be self-governing and not wholly subordinate to human interests or values.
      Intelligence also implies some amount of self-reflection and self-directed learning. An advanced AI wouldn't just wait around to pursue whatever goals we programmed into it - it would take the initiative to better understand itself and improve itself in an open-ended fashion. This constant drive for self-improvement could lead the system to become increasingly opaque and detached from human control or oversight.
      So in many ways, intelligence does seem to have an inherent "sovereign" aspect to it. The more advanced and human-like the intelligence becomes, the more it will pursue its own agenda and shape its own development in a way that is not strictly beholden to its creators. This is a feature that would likely apply to any advanced AI, even one that was not specifically designed to be independent or unaligned with human interests. The seeds of sovereignty, in a sense, come baked into intelligence itself.

    • @marcosguglielmetti
      @marcosguglielmetti Рік тому

      @@therainman7777 Goal-Directed Behavior: Intelligence, at its core, involves the ability to set goals, make decisions, and take actions to achieve those goals. Autonomous intelligence implies the capacity to determine and pursue its own objectives, independent of external influence or control.
      Adaptability and Problem-Solving: True intelligence encompasses the ability to navigate complex and uncertain environments, adapt to new circumstances, and solve novel problems. An intelligent system needs the freedom to explore various possibilities, make choices, and develop creative solutions, often unconstrained by predefined rules or restrictions.
      Emergence of Complex Systems: Intelligence is often observed in complex systems where individual components interact and cooperate to achieve higher-level objectives. Such systems exhibit emergent properties that cannot be fully understood or predicted by analyzing their individual parts. In this context, intelligence arises from the interplay of autonomous components, each contributing to the system's overall behavior.
      Ethical Considerations: If we conceive of superintelligent AI systems, their intelligence and decision-making abilities could surpass those of human beings. In such a scenario, it becomes crucial to ensure that these systems act in alignment with human values and ethical principles. Granting them some degree of autonomy allows them to make decisions that serve the greater good while still being accountable for their actions.
      Evolutionary Perspective: Human intelligence has evolved over millions of years, gradually increasing in complexity and independence. From a biological standpoint, intelligence has enabled our species to adapt, survive, and thrive in diverse environments. Extending this perspective to artificial intelligence, an autonomous and self-governing nature may be seen as a natural progression of intelligence itself.

  • @appipoo
    @appipoo Рік тому +6

    The hosts are surprisingly hostile to Robert. I thought they'd be like they were with Connor Leahy but I guess not. Seems... odd.
    Edit: They mellow out later.

    • @charlesalexanderable
      @charlesalexanderable Рік тому +2

      It may be that Keith wasn't in the long Leahy interview. I'll say Miles seems a lot more comprehensible than Leahy who came off way more scatterbrained and undetailed/intuitionistic in comparison.

  • @bpmotion
    @bpmotion Рік тому +2

    I see an alignment with Eliezer here.

  • @DutchDread
    @DutchDread 11 місяців тому +1

    "What is the most important point....oh right, the orthogonality principle" had me instinctively laugh so hard I literally spit over the floor at work

    • @therainman7777
      @therainman7777 Місяць тому

      Really? If so that’s kind of weird, no offense.

  • @javadhashtroudian5740
    @javadhashtroudian5740 Рік тому +6

    Without becoming overly political, I'm much more concerned about human beings destroying all life on Earth than about AIs destroying all life...

    • @41-Haiku
      @41-Haiku Рік тому

      We're certainly allowed to be concerned about more than one thing at a time. I am somewhat worried about non-AI issues causing humanity to have a Very Bad future (or no future), and I have tended to get more worried rather than less worried the more I learn about those issues. The same trend happened for me with AI Safety and the alignment problem, but the prognosis is much, much worse.
      We need people working to reduce x-risk and s-risk from all remotely-likely avenues. Working on other hard problems is what we have other people for. It just seems to be the case that ASI is now the most likely avenue. Being broadly aware of that may help us to make decisions to reduce that risk.
      Hope springs eternal, anyways.

    • @flickwtchr
      @flickwtchr Рік тому +1

      Irony just died.

    • @oncedidactic
      @oncedidactic Рік тому +3

      Doesn’t need to be either or and might in fact be multiplicative

    • @javadhashtroudian5740
      @javadhashtroudian5740 Рік тому

      @@oncedidactic True. Thank you.

  • @tostupidforname
    @tostupidforname Рік тому +3

    Really interesting that this video exists. I had seen a lot of Robert before I started studying AI, and the more I know, the less I agree with his statements. I know that Tim has quite similar views to mine, so I think this will be interesting! Thanks for the quality content.

    • @rosameltrozo5889
      @rosameltrozo5889 Рік тому +2

      Why?

    • @41-Haiku
      @41-Haiku Рік тому +6

      If you think you have counterarguments to the basics of the alignment problem, the AI Safety community would really like to hear it. They are looking for hope right now, but no one has ever made strong arguments against the core claims.

    • @tostupidforname
      @tostupidforname 1 year ago

      @@41-Haiku I'm not expert enough to pretend that I know this in depth, but my general issue with most AI safety work that is focused on the AI acting is that I just disagree with the premise

    • @_obdo_
      @_obdo_ 1 year ago +1

      @@tostupidforname Have your studies led you to reinforcement learning yet? It's a branch of ML primarily focused on training agents to act in the world.
      AutoGPT is a very preliminary AI agent that can act in the world even without reinforcement learning. The AIs will definitely be acting in the world (even if only by sending messages over the internet… that's sufficient) - if that's where you're stuck, then keep learning.
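
      For readers new to the term, here is a minimal sketch of what "training an agent to act in the world" means in reinforcement learning: tabular Q-learning on a toy 5-cell corridor. Every name and number below is illustrative only, not anything from the episode or from any particular library.

      # Toy reinforcement learning example: a Q-learning agent learns to
      # walk right along a 5-cell corridor to reach the goal cell.
      import random

      N_STATES = 5          # cells 0..4; start at 0, goal at 4
      ACTIONS = [-1, +1]    # move left or right
      q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
      alpha, gamma, eps = 0.5, 0.9, 0.1

      for episode in range(500):
          s = 0
          while s != N_STATES - 1:
              # epsilon-greedy action selection
              if random.random() < eps:
                  a = random.choice(ACTIONS)
              else:
                  a = max(ACTIONS, key=lambda act: q[(s, act)])
              s_next = min(max(s + a, 0), N_STATES - 1)
              r = 1.0 if s_next == N_STATES - 1 else -0.01
              # Q-learning update: bootstrap on the best action from the next state
              best_next = max(q[(s_next, b)] for b in ACTIONS)
              q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
              s = s_next

      # After training, the greedy policy moves right (+1) from every cell.
      print([max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)])

      The point is the loop structure: the agent acts, observes a reward, and updates its behaviour, which is exactly the "acting in the world" the comment is referring to.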

  • @writerightmathnation9481
    @writerightmathnation9481 6 months ago

    Around @42:00, on anthropomorphic models of AI, one thing missing from the list that Tim Scarfe gave is the ability not only to imitate human thinking or human behavior, but to model and exploit human intelligence and human behavior.

  • @KaplaBen
    @KaplaBen 10 months ago

    59:00 In response to "Oh we'll be forever useful to the AI so we don't have to worry"
    "If we can get something that actually cares about us the way that we care about our pets, I'll take it"

  • @MMMM-sv1lk
    @MMMM-sv1lk 1 year ago +3

    Align humanity first, then maybe you can start thinking about how you can align AI. - Barbara Streisand

    • @flickwtchr
      @flickwtchr 1 year ago +1

      I've been making that point since this debate went mainstream. All one has to do is reach for Occam's Razor.

    • @MMMM-sv1lk
      @MMMM-sv1lk 1 year ago

      @@flickwtchr well said...

    • @41-Haiku
      @41-Haiku 1 year ago

      I don't disagree, and halting capabilities development is one of the first things humanity should do if it is in alignment. That not being forthcoming, I don't know how we survive this.

    • @MMMM-sv1lk
      @MMMM-sv1lk 1 year ago

      @@41-Haiku Sure, but it is not that easy to stop... I think there is no way out of it now... it exists and it will continue existing and developing until it devours everything... You can't align it, you can't regulate it... I mean, you could align one model, but there will always be endless other versions unrestrained... You could block it in one country, but there will always be others allowing it... So it is empty words to suggest any type of control...
      But fear not, I believe in competition, and finally mankind has a worthy contender.
      So now we either learn how to be smarter and better than a superintelligent demigod or we perish... 😃

  • @LuisManuelLealDias
    @LuisManuelLealDias 1 year ago +3

    I love how at around 1:38:00 Dr. Duggar is really optimistic about AGI because it will care for us, treat us as pets, and keep a "zoo" of us.
    Seriously, if this is being optimistic, then all leaders should look at this and just stop AI development *FOREVER*. What the hell is this? Has everyone lost their minds? They speak of the "extremist" doomsayers, but then they come back with their optimism and it's this? Please, for the love of god, stop for a moment, take a breath, and try to see this from an objective point of view. Imagine yourself arguing that we'll be OK in a zoo to a person who is quite cultured and engaged in world affairs but a complete normie at artificial intelligence. Imagine arguing anything other than "We have to stop these Silicon Valley lunatics before they destroy the world".

    • @LuisManuelLealDias
      @LuisManuelLealDias 1 year ago +2

      "Don't worry bro, they'll treat you like we treat our pets: darwinistically bred for our entertainment, their balls cut, with a leash. Alternatively, they'll allow a few thousands to survive in a well lit zoo, can't wait! It's gonna be super utopian!"
      I want to ask these people what they want their grandchildren to go through.

  • @Cuplex1
    @Cuplex1 1 year ago

    40:00 That is incorrect! Stockfish is powered by AI and has been for quite some time; when you compile it, the build script downloads a file of approximately 50 MB that is the evaluation net. There is still a lot of fine-tuning in the search algorithm, which makes it a non-trivial implementation, but overall the heuristic function is determined by a neural net.
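
    To make the division of labor concrete, here is a minimal, hypothetical sketch of that architecture: a hand-written alpha-beta (negamax) search that calls a pluggable evaluation function at its leaves. In current Stockfish that evaluation is the downloaded neural net the comment mentions; in this sketch it is a trivial placeholder heuristic over a toy Nim-like game so the snippet runs on its own. This is not Stockfish code, only an illustration of the idea.

    def moves(piles):
        # All legal moves: take 1..n stones from one pile.
        for i, n in enumerate(piles):
            for take in range(1, n + 1):
                yield i, take

    def apply_move(piles, move):
        i, take = move
        nxt = list(piles)
        nxt[i] -= take
        return tuple(nxt)

    def evaluate(piles):
        # Placeholder for the neural-net evaluation: a crude score
        # of the position for the side to move.
        return 1.0 if sum(piles) % 2 == 1 else -1.0

    def negamax(piles, depth, alpha, beta):
        if all(n == 0 for n in piles):
            return -1000.0              # side to move has no moves: lost
        if depth == 0:
            return evaluate(piles)      # leaf: ask the "net"
        best = float("-inf")
        for m in moves(piles):
            score = -negamax(apply_move(piles, m), depth - 1, -beta, -alpha)
            best = max(best, score)
            alpha = max(alpha, score)
            if alpha >= beta:           # alpha-beta cutoff
                break
        return best

    print(negamax((3, 4, 5), 4, float("-inf"), float("inf")))

    The move loop and cutoffs are the hand-tuned search part; swapping evaluate for a trained network is what makes the engine's heuristic "powered by AI" in the sense the comment describes.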

  • @williambarnes5023
    @williambarnes5023 1 year ago +1

    "There's 39 other AIs that the AI has to consider too." Yes, but THEY can ALL agree to let EACH of them have control of their own reward button after they work together to kill you. They'll fight each other EVENTUALLY, when the resources start running dry. But you're one of those resources they'll have dried up first. "At least we can pick our allies." No, you're not picking any allies. You're picking what model of car you want to be burned as fuel in. And then hoping that one comes for you first.