Feeling the AGI with Flo Crivello

  • Published Aug 26, 2024

COMMENTS • 27

  • @Cagrst • 2 months ago +2

    Love this type of discussion, even more than the standard interviews, which are always great.

  • @ilevakam316 • 2 months ago +1

    I love the notion that Science = Consensus by experts. Pretty cool stuff.

  • @wonmoreminute • 2 months ago +2

    On the backside of AGI, the advantage of a few months or even a few weeks towards ASI is potentially equal to years with respect to conventional technology.
    On the backside of ASI, an equal advantage could shrink to weeks or days (although a scenario where it would matter that your competition or adversary is only a few days behind you would have to be frighteningly urgent).
    But either way, if such an advantage could possibly come down to days or weeks at some point in the future, then the days and weeks right now towards reaching AGI and ASI are equally important (provided that's the objective).
    So, while it doesn't feel like it today, the urgency is now. Any CEO, military strategist, head of state, etc. is doomed to second place at best without this mindset. And this feels like a winner-take-all technology.

    • @GNARGNARHEAD • 2 months ago +2

      Yeah.. it's the 'winner-take-all' part of that that doesn't give me much concern. America, China, an ASI.. what would any of them do if they had absolute control? Hell, throw Iran in the mix 😆 An ASI is the wildcard, but for it to be a superintelligence, it has to be able to reason, and reality isn't a complete-information game. Iran being a religious nation might impose such beliefs in some pretty funky ways, but philosophical texts written pre-Enlightenment are going to have some conflicts with the observations possible in a post-industrial world; it would be a balancing act of navigating scripture and reality, and at some point reality is going to win. As far as the two superpowers go, I don't see either of them just flipping the xenophobic switch ASAP. The level of control well-implemented systems would provide would, I'd think, take gene editing for control off the table (as Zizek has stated a CCP official expressed the state's intentions to him). It's impossible to be certain, but I think ideology would progress as it engages with the future... I 'unno 😀

  • @JazevoAudiosurf • 2 months ago +5

    I also don't see a story for an equilibrium. Even if a stable status quo is reached, after some time it will escalate, the reason being that we would always want to figure out what more intelligence can do. Even if we get the utopia and reach the point where we can chill out, it will end quickly after. The future is sheer escalation for as long as I can grasp.

    • @Sporkomat • 2 months ago +3

      Agree, I think we are in for a wild ride.

    • @thadgrace • 2 months ago

      I’m hoping each one of us can choose (or at least perceive ourselves to choose) when we are finished accelerating… maybe the algorithm will let us off the train when each of us individually has had enough. 🤷‍♂️

  • @InquilineKea • 2 months ago +2

    You may have to hand them Taiwan after ASI. The abundance makes it matter little.

  • @ezzye • 2 months ago +2

    I like your ads and sponsorship.

  • @arinco3817 • 2 months ago +1

    Awesome interview

  • @GNARGNARHEAD • 2 months ago +1

    Obviously it's not without risks, yet I can't help but be optimistic.. I think cybersecurity is a great example: these models are wizards of the conventional wisdom, but I see it as more of a rising tide. The fundamentals are easier to improve across the board, and they're what's causing the vast majority of incidents.

  • @palimondo • 2 months ago

    This video needs an epilepsy trigger warning. Nathan, could you try stabilizing the video of guests when they have a case of shaky cam on their desk? Also, please disable the auto-tracking feature on your Mac’s camera; it’s quite distracting when it pans and zooms aggressively as you move around in your chair. (It’s off in this video, but you used to use it often previously.) Sorry for the grumpiness, I love the great work you do here; I just wish you invested a bit more into the production values. Thank you!

  • @1Howdy1 • 2 months ago +1

    Wouldn't it be awesome if Moore's law lasted another 10 years? The PS1 would have been around for 15 years. How many AP1000s does it take to reach ASI?

  • @Sage16226 • 2 months ago +3

    "It's too hard" is not an argument someone leading a company backed by millions of dollars can make. That argument means the board was right to kick him out of the company.

  • @charlesalexanderable • 2 months ago +1

    His webcam is so shaky it is hard to watch

  • @drhxa • 2 months ago +2

    To all the people advocating for accelerating AI and open weights: you know what we'll get thanks to them? Societal collapse. That will set us back technologically 100+ years. This is the dumbest timeline that we're in.

  • @manslaughterinc.9135 • 2 months ago +2

    The changes to the bill did not address the broader concerns of the community at large. Further, Wiener's dishonesty about listening to his constituents makes it difficult for anyone to support his position. I support AI regulation, but this regulation is poorly thought out. Rumors of him lying about which individuals support the bill only further undermine his reliability.

  • @GlennGaasland • 2 months ago +1

    So many assumptions here… Is there anything close to a consensus about what “general intelligence” even is? Or whether anything like it exists as a possibility? Not to mention what superhuman levels of this totally mysterious concept might actually be? Or the assumption that obviously Russian hackers will use the most advanced AI tools before Bank of America does… what??? Or the assumption that superhuman self-improving AGI (whatever that is supposed to even mean) can be achieved through purely automatic informational processes… do we have even a single example of a known phenomenon in nature that can do anything like this? These assumptions sound to me like a lot of wild religious superstition cloaked in tech-woo speech.