Why Not Just: Think of AGI Like a Corporation?

  • Published 28 May 2024
  • Corporations are kind of like AIs, if you squint. How hard do you have to squint though, and is it worth it?
    In this video we ask: Are corporations artificial general superintelligences?
    Related:
    "What can AGI do? I/O and Speed" ( • What can AGI do? I/O a... )
    "Why Would AI Want to do Bad Things? Instrumental Convergence" ( • Why Would AI Want to d... )
    Media Sources:
    "SpaceX - How Not to Land an Orbital Rocket Booster" ( • How Not to Land an Orb... )
    Undertale - Turbosnail
    Clerks (1994)
    Zootopia (2016)
    AlphaGo (2017)
    Ready Player One (2018)
    With thanks to my excellent Patreon supporters:
    / robertskmiles
    Jordan Medina
    Jason Hise
    Pablo Eder
    Scott Worley
    JJ Hepboin
    Pedro A Ortega
    James McCuen
    Richárd Nagyfi
    Phil Moyer
    Alec Johnson
    Bobby Cold
    Clemens Arbesser
    Simon Strandgaard
    Jonatan R
    Michael Greve
    The Guru Of Vision
    David Tjäder
    Julius Brash
    Tom O'Connor
    Erik de Bruijn
    Robin Green
    Laura Olds
    Jon Halliday
    Paul Hobbs
    Jeroen De Dauw
    Tim Neilson
    Eric Scammell
    Igor Keller
    Ben Glanton
    Robert Sokolowski
    Jérôme Frossard
    Sean Gibat
    Sylvain Chevalier
    DGJono
    robertvanduursen
    Scott Stevens
    Dmitri Afanasjev
    Brian Sandberg
    Marcel Ward
    Andrew Weir
    Ben Archer
    Scott McCarthy
    Kabs Kabs Kabs
    Tendayi Mawushe
    Jannik Olbrich
    Anne Kohlbrenner
    Jussi Männistö
    Mr Fantastic
    Wr4thon
    Dave Tapley
    Archy de Berker
    Kevin
    Marc Pauly
    Joshua Pratt
    Gunnar Guðvarðarson
    Shevis Johnson
    Andy Kobre
    Brian Gillespie
    Martin Wind
    Peggy Youell
    Poker Chen
    Kees
    Darko Sperac
    Truls
    Paul Moffat
    Anders Öhrt
    Lupuleasa Ionuț
    Marco Tiraboschi
    Michael Kuhinica
    Fraser Cain
    Robin Scharf
    Oren Milman
    John Rees
    Shawn Hartsock
    Seth Brothwell
    Brian Goodrich
    Michael S McReynolds
    Clark Mitchell
    Kasper Schnack
    Michael Hunter
    Klemen Slavic
    Patrick Henderson
    / robertskmiles
  • Science & Technology

COMMENTS • 791

  • @MrGustaphe
    @MrGustaphe 5 years ago +822

    "Instead of working it out properly, I just simulated it a hundred thousand times" We prefer to call it a Monte Carlo method. Makes us sound less dumb.

    • @riccardoorlando2262
      @riccardoorlando2262 5 years ago +122

      Through the use of extended computational resources and our own implementation of the Monte Carlo algorithm, we have obtained the following.

    • @plapbandit
      @plapbandit 5 years ago +26

      Hey man, we're all friends here. Sometimes you've just gotta throw shit at the wall til something sticks. Merry Christmas!

    • @pafnutiytheartist
      @pafnutiytheartist 5 years ago +10

      Well it's the second best thing to actually working it out properly

    • @silberlinie
      @silberlinie 5 years ago +7

      ...simulated it a few MILLION times...

    • @jonigazeboize_ziri6737
      @jonigazeboize_ziri6737 5 years ago +1

      How would a statistician solve this?
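Since the thread asks how a statistician would do it: "simulate it a hundred thousand times" is itself a standard Monte Carlo estimate of an expected sample maximum. A minimal sketch, assuming the video's model (each person's idea quality drawn i.i.d. from a normal with mean 100 and standard deviation 10):

```python
import random
import statistics

def best_idea(n_people, mean=100.0, sd=10.0):
    """Quality of the best idea when each of n_people draws one idea from N(mean, sd)."""
    return max(random.gauss(mean, sd) for _ in range(n_people))

# Monte Carlo: estimate the expected best-of-100 quality by averaging many trials
trials = 20_000
estimate = statistics.fmean(best_idea(100) for _ in range(trials))
print(round(estimate, 1))  # ≈ 125, the best-idea figure for a 100-person company
```

The same number also drops out analytically from the cdf of the sample maximum, which another thread under this video works out.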

  • @dirm12
    @dirm12 5 years ago +307

    You are definitely a rocket surgeon. Don't let the haters put you down.

  • @user-go7mc4ez1d
    @user-go7mc4ez1d 5 years ago +588

    "Like Starcraft".
    That aged well....

    • @Qwerasd
      @Qwerasd 5 years ago +15

      Was about to comment this.

    • @CamaradaArdi
      @CamaradaArdi 5 years ago +6

      I don't even know if AlphaStar had played vs. TLO by then, but I think it had.

    • @RobertMilesAI
      @RobertMilesAI  5 years ago +239

      It said 'for now'!

    • @guyincognito5663
      @guyincognito5663 5 years ago +8

      Robert Miles you lied, 640K is not enough for everyone!

    • @Zeuts85
      @Zeuts85 5 years ago +22

      I wouldn't say this has been demonstrated. So far AlphaStar can only play as and against Protoss, and it hasn't played any of the top pros. Don't get me wrong, I think Mana is an amazing player, but until it can consistently beat the likes of Stats, Classic, Hero, and Neeb (without resorting to super-human micro), then one can't really claim it has beaten humans at Starcraft.

  • @618361
    @618361 5 years ago +278

    For anyone interested in the statistics of the model at 6:16:
    The cumulative distribution function (cdf) of the maximum of multiple random variables is, if they are all continuous and independent of one another, the product of their cdfs. This can be used to solve analytically for the statistics he shows throughout the video:
    Start with the pdf (a bell curve in this case) for the quality of one person's idea and integrate it to get the cdf for one person. Then, since each person is assumed to have the same statistics, raise that cdf to the Nth power, where N is the number of people working together on the idea. This gives you the cdf of the corporation. Finally, you can get the pdf of the corporation by taking the derivative of its cdf.
    For fun, if you do this for the population of the Earth (7.5 billion) using his model (mean = 100, st. dev. = 10), you get ideas with a 'goodness' quality of only around 164. If an AI can consistently suggest ideas with a goodness above 164, it will consistently outperform the entire human population working together.

    • @horatio3852
      @horatio3852 4 years ago +4

      thx u))

    • @harry.tallbelt6707
      @harry.tallbelt6707 4 years ago +9

      No, actually thank you, though

    • @cezarcatalin1406
      @cezarcatalin1406 4 years ago +9

      That’s if the model you are using is correct... which might not be.
      Edit: Probably it’s wrong.

    • @drdca8263
      @drdca8263 4 years ago +1

      Oh, multiplying the CDFs, that’s very nice. Thanks!

    • @618361
      @618361 4 years ago +25

      @@cezarcatalin1406 That's a valid criticism. The part I felt most iffy about was the independence assumption. People don't suggest ideas in a vacuum; they are inspired by the ideas of others, so one smart idea can lead to another. It's also possible that individuals have a heavy-tailed distribution (like a power law, perhaps) instead of a Gaussian when it comes to ideas. This might capture the observation of paradigm-shattering brilliant ideas (like writing, the invention of 0, Fourier decomposition, etc.). Both would serve to undermine my conclusion. That being said, I didn't want that to get in the way of the fun, so I just went with those assumptions.
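The cdf-product calculation in this thread can be checked numerically with nothing but the Python standard library. A sketch under the thread's assumptions (i.i.d. normal ideas, μ = 100, σ = 10); it uses the identity E[S] = ∫₀^∞ (1 − F(s)ⁿ) ds, which is valid here because negative idea quality has negligible probability:

```python
import math
from statistics import NormalDist

def mean_best_idea(n, mu=100.0, sigma=10.0, hi=300.0, steps=150_000):
    """E[max of n i.i.d. N(mu, sigma) draws], via the cdf product F_S = F^n."""
    d = NormalDist(mu, sigma)
    ds = hi / steps
    total = 0.0
    for i in range(steps):
        s = (i + 0.5) * ds  # midpoint rule on [0, hi]
        c = d.cdf(s)
        # F_S(s) = F(s)^n, done in log space so it survives n in the billions
        f_max = math.exp(n * math.log(c)) if c > 0.0 else 0.0
        total += (1.0 - f_max) * ds
    return total

print(round(mean_best_idea(100)))            # best idea of a 100-person company: ~125
print(round(mean_best_idea(7_500_000_000)))  # best idea of everyone on Earth: ~164
```

This reproduces both numbers from the thread without any simulation.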

  • @sashaboydcom
    @sashaboydcom 4 years ago +69

    Great video, but one thing I think you missed is that a corporation doesn't need any of its employees to know what works, it just needs to survive and make money.
    This means that the market as a whole can "know" things that individuals don't, since companies can be successful without fully understanding *why* they're successful, or fail without anyone knowing why they fail. Even if a company succeeds through pure accident, the next companies that come along will try to mimic that success, and one of *them* might succeed by pure accident, leading to the market as a whole "knowing" things that people don't.

    • @AtticusKarpenter
      @AtticusKarpenter 1 year ago +3

      And... that's a pretty ineffective way of doing things, if we look at modern HollyWoke or Ubisoft

    • @glaslackjxe3447
      @glaslackjxe3447 1 year ago +2

      This can be seen as part of AI training, if a corporation has the wrong goal or wrong solution it will be outcompeted/fail and the companies that survive have better selected for successful ways to maximise profit

    • @monad_tcp
      @monad_tcp 1 year ago

      @@AtticusKarpenter I bet those are not following market signals and not succeeding at the market, yet they survive from income from other "sources", the stupid ESG scores

    • @rdd90
      @rdd90 11 months ago

      This is true, but only for tasks with a small enough solution space that it's feasible to accidentally stumble across the correct solution. This is unlikely to be the case for sufficiently hard intellectual problems. Also, a superintelligence will likely be better at stumbling across solutions than corporations, since the overhead of spinning up a new instance of the AI will likely be less than that of starting a new company (especially in terms of time).

  • @yunikage
    @yunikage 4 years ago +90

    "we're going to pretend corporations dont use AI"
    ah yes, and im going to assume a spherical cow....

    • @brumm0m3ntum94
      @brumm0m3ntum94 3 years ago +12

      in a frictionless...

    • @Tomartyr
      @Tomartyr 2 years ago +7

      vacuum

    • @linnthwin7315
      @linnthwin7315 1 year ago +1

      What do you mean my guy just avoided an infinite while loop

  • @TheOneMaddin
    @TheOneMaddin 5 years ago +44

    I have the feeling that AI safety research is the attempt to outsmart a (by definition) much smarter entity by using preparation time.

    • @oldvlognewtricks
      @oldvlognewtricks 4 years ago +19

      I seem to remember Mr. Miles mentioning in several videos that trying to outsmart the AI is always doomed, and a stupid idea (my wording). Hence all the research into aligning AI goals with human interests and which goals are stable, rather than engaging in a cognitive arms race we would certainly lose.

    • @martinsmouter9321
      @martinsmouter9321 4 years ago +2

      It's an attempt to get a head start: with more time and resources, we might be able to overwhelm it.
      A little bit like building a fort: you know bigger armies will come, so you build structures to help you fight them off more efficiently.

    • @augustday9483
      @augustday9483 1 year ago +2

      And it looks like we've run out of prep time. AGI is very close. And the pre-AGI that we have right now are already advanced enough to be dangerous.

  • @stevenneiman1554
    @stevenneiman1554 1 year ago +15

    I think one of the most important things to understand about both corporations and AIs is that as an agent's capabilities increase, its ability to do helpful things increases, but the risk of misalignment problems which cause it to do bad things increases faster. As an agent with goals grows, it becomes more able to seek its goals in undesirable ways, the efficacy of its actions increases, it becomes more likely to be able to recognize and conceal its misalignment, AND it becomes less likely you'll be able to stop it if you do discover a problem.

  • @petersmythe6462
    @petersmythe6462 5 years ago +447

    "You can't get a baby in less than 9 months by hiring two pregnant women."
    Wow we really do live in a society.

    • @williambarnes5023
      @williambarnes5023 5 years ago +72

      If you hire very pregnant women, you can get that baby pretty quick, actually.
      The 200 IQ move here is to go to the orphanage or southern border. You can just buy babies directly.

    • @e1123581321345589144
      @e1123581321345589144 5 years ago +14

      If they're already pregnant when you hire them, then yeah, it's quite possible

    • @dannygjk
      @dannygjk 5 years ago +13

      I think it's safe to assume that the quote is meant to be read as two women who just became pregnant.
      To assume otherwise is to assume that whoever said it doesn't have enough brain cells to be classified as a paramecium.

    • @isaackarjala7916
      @isaackarjala7916 4 years ago +22

      It'd make more sense as "you can't get a baby in less than 9 months by knocking up two women"

    • @diabl2master
      @diabl2master 4 years ago +4

      Oh shut up, you know what he meant

  • @flamencoprof
    @flamencoprof 5 years ago +40

    As a reader of Sci-Fi since the Sixties, I remember at the dawn of easily available computing power in the Eighties I wrote in my journal that the Military-Industrial complex might have a collective intelligence, but it would probably be that of a shark!
    I appreciate having such thoughtful material available on YT. Thanks for posting.

  • @visigrog
    @visigrog 5 years ago +46

    In most corporate settings, a few individuals get to pick which ideas are implemented. From experience, they are almost always not close to the best ideas.

  • @Primalmoon
    @Primalmoon 5 years ago +79

    Only took a month for the Starcraft example to become dated, thanks to AlphaStar. >_<

    • @spencerpowell9289
      @spencerpowell9289 4 years ago +5

      AlphaStar arguably isn't at a superhuman level yet though(unless you let it cheat)

    • @rytan4516
      @rytan4516 4 years ago +3

      @@spencerpowell9289 By now, AlphaStar is beyond my skill, even with more limitations than I have.

  • @jonathanedwardgibson
    @jonathanedwardgibson 4 years ago +6

    I’ve long thought corporations are analog prototypes of AI, lumbering across the centuries: faceless, undying, immortal, without moral compass as they clear-cut and plow under another region by their mad, minimal operating rules.

    • @MrTomyCJ
      @MrTomyCJ 1 year ago

      Corporations clearly do have a very important moral compass, and even Miles himself considers that so far humanity has been progressing. The fact that some are corrupt doesn't mean corporations as a concept are intrinsically bad, just as with humans in general.

  • @petersmythe6462
    @petersmythe6462 5 years ago +337

    Corporations still have basically human goals, just those of the bourgeoisie.
    AI can have very inhuman goals indeed.
    A corporation might bribe a government to send in the black helicopters and tanks to control your markets so it can enhance the livelihood of the shareholders.
    An AI might send in container ships full of nuclear bombs and then threaten your country's dentists with nuclear annihilation if they don't take everyone's teeth because its primary goal and only real purpose in life is to study teeth at large sample sizes.

    • @SA-bq3uy
      @SA-bq3uy 5 years ago +3

      Humans cannot have differing terminal goals, some are just in a better position to achieve them.

    • @fropps1
      @fropps1 5 years ago +46

      @@SA-bq3uy What do you mean by that? I feel like it's pretty self-evident that people can have different goals. I don't have "murdering people" as a terminal goal for example, but some people do.

    • @SA-bq3uy
      @SA-bq3uy 5 years ago +7

      @@fropps1 These are instrumental goals, not terminal goals. We all seek power whether we're willing to accept it or not.

    • @fropps1
      @fropps1 5 years ago +46

      @@SA-bq3uy If your argument is what I think it is then it's reductive to the point where the concept of terminal goals isn't useful anymore.
      I don't happen to agree with the idea that people inherently seek power, but if we take that as a given, you could say that the accumulation of power is an instrumental goal towards the goal of triggering the reward systems in the subject's brain.
      It is true that every terminal goal is arrived at by the same set of reward systems in the brain, but the fact that someone is compelled to do something because of their brain chemistry doesn't tell us anything useful.

    • @SA-bq3uy
      @SA-bq3uy 5 years ago +2

      @@fropps1 All organisms are evolutionarily selected according to the same capacities, the capacity to survive and the capacity to reproduce. The enhancement of either is what we call 'power'.

  • @Soumya_Mukherjee
    @Soumya_Mukherjee 5 years ago +105

    Great video Robert. See you again in 3 months.
    Seriously we need more of your videos. Love your channel.

  • @morkovija
    @morkovija 5 years ago +158

    Been a long time Rob! Glad to see you

    • @d007ization
      @d007ization 5 years ago +2

      Y'all are way more intelligent than I lol.

    • @shortcutDJ
      @shortcutDJ 5 years ago +1

      1,5 x speed = 1.5 more fun

    • @stevenmathews7621
      @stevenmathews7621 5 years ago +2

      @@shortcutDJ not sure about that..
      there might be diminishing returns on that ; P

    • @MrGustaphe
      @MrGustaphe 5 years ago +1

      @@shortcutDJ Surely it's 1.5 times as much fun.

    • @diabl2master
      @diabl2master 4 years ago

      @@MrGustaphe No, simply 1.5 more units of fun.

  • @eclipz905
    @eclipz905 5 years ago +37

    Credits song: Bad Company

  • @EmilySucksAtGaming
    @EmilySucksAtGaming 4 years ago +7

    "can you tell I'm not a rocket surgeon" I literally just got done playing KSP, failing at reworking the internal components of my spacecraft

  • @acorn1014
    @acorn1014 4 years ago +6

    I noticed an interesting quirk: the model ignores the difficulty of picking the right suggestion. If you took 361 people and had them all play Go, one per point on the board, between them they could propose every possible move, so under the model they'd beat our current AI. Obviously they can't, which shows how much the ability to evaluate the suggestions actually matters.

  • @jennylennings4551
    @jennylennings4551 5 years ago +6

    These videos deserve way more recognition. They are very well made and thought out.

  • @cherubin7th
    @cherubin7th 5 years ago +6

    A corporation can also do something like AlphaGo's search tree. Many people have ideas, and others improve on them in different directions. Bad directions are cancelled until a very good path is found. Also, many corporations in competition behave like a swarm intelligence. But still, great video!
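The refine-and-prune process this comment describes (ideas improved in different directions, bad directions cancelled) is essentially beam search. A toy sketch, where "idea quality" is just a number and the refinement step is entirely made up for illustration:

```python
import random

def refine(idea, rng):
    """One employee's attempt to improve an idea (here: a score we try to maximize)."""
    return idea + rng.gauss(0.5, 1.0)  # noisy, slightly-positive-on-average tweak

def corporate_beam_search(seed_ideas, rounds=20, beam_width=5, proposals=4, seed=0):
    """Keep refining the current best ideas; prune bad directions each round."""
    rng = random.Random(seed)
    beam = list(seed_ideas)
    for _ in range(rounds):
        # every surviving idea gets several proposed refinements...
        candidates = [refine(idea, rng) for idea in beam for _ in range(proposals)]
        # ...and bad directions are cancelled: keep only the best few
        beam = sorted(candidates, reverse=True)[:beam_width]
    return max(beam)

print(corporate_beam_search([0.0]))  # climbs well above the starting idea
```

The pruning is what distinguishes this from the best-of-N model in the video: each round builds on the previous round's survivors instead of starting from scratch.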

  • @ThePlayfulJoker
    @ThePlayfulJoker 4 years ago +2

    This video is the kind that changed my mind twice in only 14 minutes. I love the fact that it had a true discussion on the subject and not just a half-baked opinion.

  • @qmillomadeit
    @qmillomadeit 5 years ago +57

    I've always thought about the connection of corporations to AI, as they do seek to maximize their goals in the most efficient way. Glad you put out this very well-thought-out video :)

    • @dannygjk
      @dannygjk 5 years ago +3

      Corporations are far from efficient.

    • @ziquaftynny9285
      @ziquaftynny9285 4 years ago +3

      @@dannygjk relative to what?

    • @dannygjk
      @dannygjk 4 years ago +1

      @@ziquaftynny9285 Relative to AI ;)

    • @dannygjk
      @dannygjk 4 years ago +1

      @Stale Bagelz Corporations are plagued with many of the issues that humanity has in general. For example power struggles within the corporation.

    • @PsychadelicoDuck
      @PsychadelicoDuck 4 years ago +2

      @@dannygjk I think it's less "far from efficient", and more a stop-button/specification problem. The institutions (and the people making them up) are very good at maximizing the chances of their success, as given by the metrics that the broader systems (society/government for the institutions, and internal politics for the individuals) evaluate them by. The problems are, those metrics are not necessarily measuring what people think they are measuring (due to loopholes, outright lying, etc.), any attempts to change those metrics will be fought by the organizations currently benefiting from them, and that the fundamental social-economic system those original metrics were designed from presupposed that morality was either a non-factor or would arise naturally from selfish behavior. I'm also going to point out that the "general humanity issues" you mention are greatly exacerbated by that same set of problems.

  • @V1ctoria00
    @V1ctoria00 4 years ago +1

    I binged several of your videos and noticed this rocket example comes up another time, as does the example just before it. I thought I was somehow rewatching the same one.

  • @DavenH
    @DavenH 5 years ago +16

    Every one of your videos kicks ass. Some of the most interesting material on the subject.

  • @DJHise
    @DJHise 5 years ago +8

    It took one month from when this video was made for AI to start crushing Starcraft professional players.
    (AlphaStar played Dario Wunsch and Grzegorz Komincz, ranked 44th and 13th in the world respectively, and beat both of them 5 to 0.)

  • @Garbaz
    @Garbaz 5 years ago +2

    Very interesting! And I really like the little "fun bits" you edit into your videos!

  • @Ybalrid
    @Ybalrid 4 years ago

    A coworker just shared this video with me. I had no idea you had your own YouTube channel. I like Computerphile a lot, including your ML/AI videos, so I instantly subscribed!

  • @joelkreissman6342
    @joelkreissman6342 4 years ago +2

    I've said it before and I'll say it again, "bureaucracy is a human paperclip maximizer".
    Doesn't matter if it's a private corporation or governmental.

  • @blahblahblahblah2837
    @blahblahblahblah2837 4 years ago +1

    Love the Dont Hug Me I'm Scared reference!
    Also _wow_ this has become my favourite channel. I wish I had found it 2 years ago

  • @thrallion
    @thrallion 5 years ago +2

    Once again, wonderful video. One of the most interesting and well-spoken channels on YouTube!

  • @thatchessguy7072
    @thatchessguy7072 1 year ago +1

    @9:58 In answer to your rhetorical question, I need to reference the baduk games played between Alphago zero and Alphago master. Zero plays batshit crazy strategies where even the tiniest inaccuracies cause the position to spiral into catastrophe but zero still manages to win. Zero’s strategy does not look good to amateur players, nor to professional players, but it works, it just works. Watching these games feels like listening to two gods talk, one of which has gone mad.
    @10:02 ah… well we recognized move 37 as good after the AI showed that to us.

  • @Supreme_Lobster
    @Supreme_Lobster 5 years ago +10

    Those layers arent gonna stack by themselves

  • @TXWatson
    @TXWatson 5 years ago +4

    Looking forward to episode 2 of this! I've thought the utility of this analogy lies in corporations, as intelligent nonhuman agents, giving us the opportunity to experiment with designing utility functions that might be less harmful when implemented.

  • @DieBastler1234
    @DieBastler1234 5 years ago +2

    Content and presentation are brilliant; I'm sure matching audio and video quality will follow.
    Subbed :)

    • @RobertMilesAI
      @RobertMilesAI  4 years ago

      Is this about the black and white bits at the start that are just using the phone's internal mic, or is there a problem with my lav setup?

    • @theblinkingbrownie4654
      @theblinkingbrownie4654 3 months ago

      @@RobertMilesAI Maybe they watched the video before it finished processing the higher qualities. Do you release videos before they're fully processed?

  • @zzzzzzzzzzz6
    @zzzzzzzzzzz6 5 years ago

    I've always wondered this and have been pushing this idea... awesome to have a full video on it!
    Well, not the 3 follow-on conclusions, but the comparison to AI systems

  • @donaldhobson8873
    @donaldhobson8873 5 years ago +117

    This is all making the highly optimistic assumption that the people in the corporation are cooperating for the common good. In many organizations, everyone is behaving in a "stupid" way, but if they did something else, they would get fired.

    • @gasdive
      @gasdive 5 years ago +20

      Yes, but individual neurons are 'stupid'. Individual layers of a neural net are 'stupid'.

    • @stevenmathews7621
      @stevenmathews7621 5 years ago +5

      You might be missing Price's Law there (an application of Zipf's Law): only a small part (the √ of the workers) is working for the "common good"

    • @NXTangl
      @NXTangl 4 years ago +16

      Also that the workers/CEOs are always aligned with shareholder maximization, as opposed to personal maximization. A company can destroy itself to empower a single person with money and often does.

    • @Gogglesofkrome
      @Gogglesofkrome 4 years ago +2

      what is this 'common good,' anyway? is it some ideologically driven concept that differs entirely between all humans? Ironically it is this very 'common good' which drives many companies to do evil. After all, the road to hell is paved in human skulls and good intentions.

    • @NXTangl
      @NXTangl 4 years ago +2

      @@Gogglesofkrome Common good of the shareholders in this case.

  • @buzz092
    @buzz092 5 years ago +2

    Excellent clerks reference! Also the video was outstanding as usual. :P

  • @jared0801
    @jared0801 5 years ago +1

    Great stuff, thank you so much for the video Rob

  • @faustin289
    @faustin289 4 years ago +8

    "Evaluating solutions is easier than coming up with them"
    This is why I should earn more than my boss....I come up with all the ideas; the only thing he does is criticize and pick what idea to take forward!

    • @oldvlognewtricks
      @oldvlognewtricks 4 years ago +9

      Your reasoning makes perfect sense, assuming people get paid based on the difficulty of their work. Oh, wait...

    • @pluto8404
      @pluto8404 4 years ago +1

      Then become the boss if it is so easy.

    • @landonpowell6296
      @landonpowell6296 4 years ago +3

      @@pluto8404
      Becoming the boss != Doing the boss's work.
      It's not easy to be born rich unless you already were.

    • @MrTomyCJ
      @MrTomyCJ 1 year ago

      @@landonpowell6296 yeah the issue here is that in reality, the market doesn't directly reward intelligence or hard work, it rewards the satisfaction of consumer's needs. It seems unfair, but the alternative is much worse. Besides, intelligence and hard work may not be strictly necessary but they very often do put you in the right path. And someone being born lucky or rich doesn't really mean they are being unfair to others.

  • @Mr30friends
    @Mr30friends 5 years ago +5

    This video is actually amazing. Wow. So much useful information covered. And not just useful for people interested in AI. Most of this could apply anywhere from how businesses work to how different political systems work and to pretty much anything else.

  • @brunogarnier2855
    @brunogarnier2855 5 years ago +5

    Thank you for this great video.
    It could be interesting to go through the same exercise, but with the whole world's economy,
    and evaluate the "invisible hand of the market" as an artificial-selection AI...
    Have a good weekend!

    • @MrTomyCJ
      @MrTomyCJ 1 year ago

      I find that personification of the market ("the invisible hand") a horrible mistake, as the whole point of the market is precisely that it's not a single entity; it doesn't have a particular intention. It's just a network of people with DIFFERENT ones.

  • @aenorist2431
    @aenorist2431 5 years ago +2

    They just prove that corporations are problems in similar ways.
    Not that somehow both are not a problem.
    Corporations have to be tightly controlled by the population (in the form of government) to utilize their potential without allowing their diverging goals to cause excessive damage.

  • @tho207
    @tho207 5 years ago +1

    If someone is to bring AGI to us, it should be a person like you. Your sensibleness and sensitivity are outstanding. I'll resume the video now, cheers

  • @Jack-Lack
    @Jack-Lack 3 years ago +13

    I've already conjectured a year or two ago that corporations are AI, so of course I'm going to say yes. My reasoning is:
    -Corporations make decisions based on their board of directors, which is a hive mind of supposedly well-qualified, intellectual elites.
    -A corporate board will serve the goals of its shareholders, at the expense of everything else. Even if this means firing an employee because they believe they're losing $50/year on that employee, they care more about the $50 than the fact that the employee will be out of work. It also means they may choose not to recall a dangerous product if they think a recall would be the less profitable course of action. Corporate boards are so submissive to the goals of their shareholders that it is reminiscent of the AI who maximizes stamp-collecting at the expense of everything else, even if it destroys the world in the process (see fossil fuel companies who knew about climate change in the 1960's and buried the research on it).
    -AI superintelligence is supposed to have calculation resources that make it beyond human abilities, like a chess AI that is 900 elo rating points stronger than the best human. An AGI superintelligence might manifest superhuman abilities that go beyond just intelligence, but also its ability to generate revenue in a superhuman way and its ability to influence human opinion in a superhuman way. Large corporations also have unfathomable resources to execute their goals, which (in cases like Amazon, Apple, Microsoft, or IBM) can include tens or hundreds of thousands of laborers, countless elite intellectuals, the power to actually influence federal legislation through lobbying, the financial resources to drive their competition out of business or merge with them, and public relations departments that can influence public opinion.
    Really, I think that the way corporations behave is an almost exact model for how AGI would behave.

  • @Bootleg_Jones
    @Bootleg_Jones 5 years ago +8

    I love that you used XKCD's Up Goer Five as your example rocket blueprint. Definitely one of the best comics Randall has ever put out.

  • @arthurguerra3832
    @arthurguerra3832 5 years ago

    Finally! I was tired of rewatching your old videos, haha. Keep 'em coming

  • @DYWYPI
    @DYWYPI 1 year ago +1

    When thinking about AI as a metaphor for corporations, rather than the other way around, it's not necessarily the superhuman *intelligence* of the AI that is important or that makes them inherently dangerous - merely the fact that the intelligence makes it superhumanly *powerful*. Whether or not we accept that a corporation is significantly more intelligent than a human, they're fairly self-evidently significantly more powerful than one, with more ability to effect change in the world and to gather instrumental resources to increase that ability.

  • @willemvandebeek
    @willemvandebeek 5 years ago

    Merry Christmas Robert! :)

  • @LeoStaley
    @LeoStaley 5 years ago +2

    The video you did on Computerphile about Asimov's three laws of robotics was the most impactful, concise expression of what the danger of AI development is. You made the point that "you have to solve ethics", and the fact that the people building it are going, "hold on, I'm just a computer programmer, I didn't sign up for that." Those two things combined have stuck with me for years.

  • @lucbloom
    @lucbloom 1 year ago

    Is that a Don’t Hug Me I’m Scared reference in the graph???
    Oh man so awesome.

  • @albirtarsha5370
    @albirtarsha5370 4 years ago +1

    Anything You Can Do (Annie Get Your Gun) by Howard Keel, Betty Hutton
    AGI:
    Anything you can be, I can be greater.
    Sooner or later I'm greater than you.

  • @JM-us3fr
    @JM-us3fr 5 years ago +1

    This was my question! Thanks Rob for answering it

  • @EebstertheGreat
    @EebstertheGreat 3 years ago +2

    At 7:14, the graph looks wrong. That histogram should resemble the graph of the probability density of a sample maximum. In general, if X₁, ..., Xₙ are independent and identically distributed random variables (i.e. a sample of size n) with cumulative distribution function Fₓ(x), then S = max{X₁, ..., Xₙ} has cumulative distribution function Fₛ(s) = [Fₓ(s)]ⁿ. So if each X has probability density function fₓ(x) = Fₓ'(x), then S has probability density function fₛ(s) = n fₓ(s) [Fₓ(s)]ⁿ⁻¹ = n fₓ(s) [∫ fₓ(t) dt]ⁿ⁻¹, where the integral is taken from -∞ to s.
    Here, we assumed the variables were normally distributed and set μ = 100 and σ = 20, so fₓ(x) = 1/(20√(2π)) exp(-(x-100)²/800), and thus fₛ(s) = n/(20√(2π))ⁿ exp(-(s-100)²/800) [∫ exp(-(t-100)²/800) dt]ⁿ⁻¹. The mean of this is E[S] = ∫ s fₛ(s) ds, integrating over ℝ. Doing this numerically in the n = 100 case gives a mean of 150.152. We can also make use of an approximate formula for large n: E[S] ≈ μ + σ Φ⁻¹((n − π/8)/(n − π/4 + 1)). For the given parameters and n = 100, we get E[S] ≈ 100 + 20 Φ⁻¹((100 − π/8)/(101 − π/4)) ≈ 150.173. In either case, it is not plausible that you got a mean of 125 with n = 100, σ = 20 like you said. You must have used σ = 10, not σ = 20. That also explains the "σ = 20" you wrote between those vertical bars at 6:31: you probably meant that the distance between μ+σ and μ−σ was 20, i.e. σ = 10.

    • @RobertMilesAI
      @RobertMilesAI  3 years ago +2

      That's correct! Though, since I picked the value for the standard deviation out of thin air, it can just be 10 instead and it doesn't affect the point I was trying to make
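
    The sample-maximum claim in this thread is easy to check numerically. A minimal Monte Carlo sketch (the function name and trial count are illustrative; the parameters μ, σ, and n come from the comment above):

    ```python
    import random
    import statistics

    def mean_sample_max(n, mu, sigma, trials=20000):
        """Monte Carlo estimate of E[max(X1..Xn)] for iid Normal(mu, sigma) draws."""
        return statistics.fmean(
            max(random.gauss(mu, sigma) for _ in range(n)) for _ in range(trials)
        )

    random.seed(0)
    # sigma = 10 reproduces the ~125 mean seen in the video's histogram;
    # sigma = 20 gives ~150, matching the comment's exact calculation.
    print(round(mean_sample_max(100, 100, 10), 1))
    print(round(mean_sample_max(100, 100, 20), 1))
    ```

    The simulated means land close to the analytic values of 125.08 and 150.15, confirming the σ = 10 diagnosis.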

  • @thewhitefalcon8539
    @thewhitefalcon8539 1 year ago +1

    This diminishing returns stuff presumably also applies to electronic AGI. Look at the server resources they pour into GPT.

  • @AiakidesAkhilleus
    @AiakidesAkhilleus 5 years ago +1

    Great quality video, congratulations

  • @dantenotavailable
    @dantenotavailable 5 years ago +2

    Also don't forget communication costs. Scaling any human process to 1,000 people becomes incredibly difficult due to the overhead of keeping everyone pointed in the same direction. Just documenting the suggestions from 1,000 people would require a significant number of people and a lot of time, and making sure the suggestions are documented correctly and unambiguously, and then evaluated, would be a herculean task. It's not for no reason that most Agile development techniques are most effective at 5 to 6 people, and that most advice for teams of 10+ is "split into 2 teams that don't need to coordinate".

  • @ricardoabh3242
    @ricardoabh3242 4 years ago

    Always really interesting and clear, with a nice open-ended storyline.

  • @adrianmiranda5531
    @adrianmiranda5531 5 years ago +9

    I just came here to say that I appreciated the Tom Lehrer reference. Keep up the great videos!

  • @petersmythe6462
    @petersmythe6462 1 year ago

    In some ways your "have each person generate an idea and pick the best" actually understates the problem. There are many types of problems, e.g. picking a move in chess, where ideas are easy to come up with but hard to evaluate.

  • @pierfonda
    @pierfonda 5 years ago +3

    Ahhh the move 37/Clerks reference!! Perfect

  • @natfrey6503
    @natfrey6503 5 years ago +1

    We might also consider some forms of government as behaving like AIs, even whole societies for that matter. They can all go awry when citizens who go along with the "program" are convinced their actions serve a higher good. It's the conundrum of how good-natured people can participate in the making of an avoidable calamity. But this brings in the question of human evil, or moral failing (as we see so much of in large corporations), which even when quite innocuous at an individual level can be brutal when added up at a mass level.

  • @its.dan.eastwood
    @its.dan.eastwood 5 years ago

    Great video, thanks for sharing!

  • @petersmythe6462
    @petersmythe6462 4 years ago +1

    The other important thing about corporations is they ultimately rely on people (their workers, customers, and supply chain) for them to function. This is why strikes are so effective and boycotts are also somewhat effective. The actual people that have to cooperate with the corporate leadership apparatus are the majority of human beings. Now, they don't have the choice to not cooperate with at least some corporations, but they can perfectly well agree not to carry out some directive.

  • @commenter3287
    @commenter3287 4 years ago +1

    I have enjoyed your computerphile videos, but these scripted ones are even better. I had never heard the AI/Corporation comparison before, so in one succinct video you introduced me to a very interesting analogy and analyzed the problems with the analogy very well.

  • @leninalopez2912
    @leninalopez2912 5 years ago +24

    This is fast becoming even more cyberpunk than Neuromancer.

  • @xDeltaF1x
    @xDeltaF1x 4 years ago +7

    I think the statistical model is a bit flawed/oversimplified. Groups of humans don't just select the best idea from a pool; they often build upon those ideas to create new and better ones.

    • @CommanderPisces
      @CommanderPisces 4 years ago

      Basically this just means that an "idea" can actually have several smaller components that can be improved upon. I think this is more than offset by the fact that (as discussed in the video) humans still can't select the best ideas even when they're presented.

  • @pacibrzank78
    @pacibrzank78 5 years ago +1

    Every haircut you had so far was on point

  • @GreenDayFanMT
    @GreenDayFanMT 5 years ago

    Very interesting topic. Thanks for this viewpoint.

  • @bibasniba1832
    @bibasniba1832 4 years ago

    Thank you for sharing!

  • @hayuseen6683
    @hayuseen6683 4 years ago

    A wonderfully well-considered problem, presented both in bite-sized form and expounded on at length.
    Logicians are some of my favorite people.

  • @Verrisin
    @Verrisin 5 years ago +2

    I like this idea overall. Somewhat smarter, but also somewhat slower. -- Controllable by other grouped-human entities (like governments)
    + a lot of other points, but I think that is kind of the main thing that differentiates it from ASI.

  • @limitless1692
    @limitless1692 5 years ago

    Wow this video was really interesting ..
    Thanks for creating it

  • @BM-bu4xd
    @BM-bu4xd 5 years ago

    Yeah! Terrific. Many thanks.

  • @loopuleasa
    @loopuleasa 5 years ago +1

    3:48
    Nice thinking adding the "(for now)" text in the video, as StarCraft was already beaten by DeepMind a month ago.

  • @cupcakearmy
    @cupcakearmy 5 years ago

    Amazing content again. Keep it up!

  • @definitelynotcole
    @definitelynotcole 1 year ago

    Love that bit at the start.

  • @geraldkenneth119
    @geraldkenneth119 1 year ago

    The term I came up with that might fit a corporation is Ultra-Wide Artificial General Intelligence (UWAGI): an AGI that has genius-level (but not superintelligent) competence in far more areas than you'd expect of a single human, and which can do a very large number of AGI-level tasks at once, but is still not technically superintelligent in the traditional sense. I guess one way to think of it is as being superintelligent in terms of "width" as opposed to "depth".

  • @user-jn4sw3iw4h
    @user-jn4sw3iw4h 1 year ago

    10:38
    "It is not enough for someone in your corporation to have a great idea. The people at the top need to recognize that it's a great idea."
    And even that description is optimistic. (Given that the tactic here is to give an upper bound and show that even that isn't good enough (a proven, useful engineering approach), I won't fault the video for this.)
    In practice, there's usually also an alignment problem with "the top of a corporation".

  • @MatthewStinar
    @MatthewStinar 5 years ago +9

    I love the use of XKCD Up Goer 5 diagram. 😀

  • @danieljensen2626
    @danieljensen2626 4 years ago +1

    It definitely gets complicated when corporations use AI, because we're getting fairly good at specialized intelligence, and if you have enough people you can sort of stack specialized AIs together the same way we stack specialized humans. It still comes down to the ability of decision-makers to recognize good decisions, though.

  • @ryanarmstrong2009
    @ryanarmstrong2009 4 years ago

    That clerks reference for move 37 was phenomenal

  • @hexzyle
    @hexzyle 4 years ago +2

    The flaw in this video is the assumption that there is always a better idea.
    Ideas have a cap on how good they can be: the "perfect" solution, or, for a problem with no perfect solution because of conflicting parameters, the most effective trade-off at the peak of the curve. Humans often come up with these ideas.

  • @ChibiRuah
    @ChibiRuah 4 years ago +1

    I found this video very good. I'd thought about this comparison myself, and the video expands on it and on where it fails.

  • @alexwood020589
    @alexwood020589 1 year ago

    I think another important point about idea quality in large teams is the selection process. No team coldly evaluates every idea and picks the objectively best one. The people who can articulate their ideas best, or shout the loudest, or happen to be the CEO's son, are the ones whose ideas get implemented.

  • @DarkestValar
    @DarkestValar 5 years ago +5

    Loved the XKCD reference 7:15 :D

  • @RoboBoddicker
    @RoboBoddicker 5 years ago

    Last year in the US, one of the big sporting goods retailers stopped carrying semi-automatic rifles and tightened restrictions on their gun sales in the wake of mass shootings. That decision was made solely by the CEO and it definitely didn't please a lot of shareholders. That's another big difference, I think, between corporations and AGI - the big decisions in a corporation are ultimately made by a small group of humans with human values. Not that we can always expect corporations to put morality over profits obviously, but executives can at least *recognize* an egregious situation and make moral judgments. An AGI doesn't have any such safeguards.
    Fantastic video as always, btw!

  • @nazgullinux6601
    @nazgullinux6601 5 years ago

    Loved the "Bad Company" acoustic at the end. As always, another 1-up to those not formally schooled who routinely spout nonsensical "what-ifs" at you as if they were the first person to think of the idea, haha.

  • @Nayus
    @Nayus 5 years ago +9

    This guy presses randomize on his hair every new video.
    Great video btw.
    I think the most important points of this "why not just" will be in the second video, because to me it's very obvious that a corporation's "values" and goals are very similar to humanity's, at least compared to what the goals of an unsafe AGI could potentially be. Yes, some corporations might not care about the environment, or the working conditions of their workers, or many other things they disregard in pursuit of their (probably money-related) goal, but there's no corporation on earth whose goal is to destroy the planet. Or to kill every human. Or to control their brains (that could get away with it). Or who knows what other incredibly weird things an AGI might have as an instrumental goal that it will not hesitate to pursue on the way to its terminal goal.
    You can't model AGI as a corporation, because corporations are ultimately made of humans, so they will never separate their goals too much from human goals, while AGI does not have that limitation.

    • @yondaime500
      @yondaime500 5 years ago +5

      I think the abilities of humans and corporations are more relevant than their values. Human values are not really aligned in general, and some individual humans or organizations, given enough power, could do pretty awful things from the point of view of other humans. I know that because it has already happened, multiple times. In fact it's happening right now in many parts of the world. The only reason they don't do worse is because they can't. But an AGI could.
      That's why I sometimes feel like value alignment is a lost cause. Ok, maybe you can get the AGI to align with humans, but which humans? We're probably screwed either way.

    • @Nayus
      @Nayus 5 years ago

      @@yondaime500 I think it's a combination of both. Even if you say that corporations are super smart, they aren't "nuclear" smart, if you know what I mean.
      But I disagree with you on the *scale* of what we mean when we say misaligned. Yes, there are groups in our world that value really different things, if you only consider the space of human values. But an AGI can have much, much more varied values.
      For example, two points on opposite sides of the planet could be called really "far away", but only if you look at the world alone. If you look at the galaxy or the solar system, opposite points on the planet are relatively very close.
      I agree that even those differences are still very dangerous and important.

    • @bobsmithy3103
      @bobsmithy3103 5 years ago

      xD Kinda reminds me of the OpenAI dude with the blue hair presenting the robot hand.

    • @micaelstarfire8639
      @micaelstarfire8639 2 years ago

      The history of corporate supported atrocities would suggest otherwise

  • @lobrundell4264
    @lobrundell4264 5 years ago +4

    Yeesss Rob is back as good as ever!

  • @artman40
    @artman40 5 years ago +1

    Better wording: I think "corporation" is a bit too narrow a term. All kinds of institutions would fit, ranging from a family union between two people, to gangs, to corporations, to entire countries. And yes, an institution can go rogue, with the most extreme versions being where the institution benefits none of its members.

  • @peabnuts123
    @peabnuts123 1 year ago

    I agree with all the analysis in this video, but from a general standpoint it seems wild to even assert that corporations are like superintelligences when we have phrases like "design by committee" or "too many cooks" to describe the regression toward the mean when solving problems with a group of people. The differentiating factor in companies' ability to do things has always been person-power in my mind, definitely not their ability to generate solutions to problems. Anyone can have an idea; it's the execution that counts. Some things require a lot of people to execute. This, IMO, is what gives organisations more capability than individuals.

  • @brr.petrovich
    @brr.petrovich 5 years ago

    We must have a new video! It's the perfect time for it.

  • @travcollier
    @travcollier 5 years ago

    A lot of the "sort of" points are very likely to apply to AGIs (at least in the early days) too.
    Anyways, we could certainly benefit from being better at aligning the goals and actions of corporations with humanity as a whole, and I think AI safety research could help with that while gaining insights about future AGIs.

  • @ianprado1488
    @ianprado1488 5 years ago

    Such a creative discussion

  • @ehsn
    @ehsn 3 years ago +1

    I'm still at 5:00, but I want to point out that, like an AI, a corporation whose objective is to win at StarCraft could probably beat an individual at finding ways to cheat or rig the competition, setting aside legal things like analyzing their opponents beforehand in order to select a good strategy and player against them. If they can communicate during play, they might have better awareness too.

    • @ehsn
      @ehsn 3 years ago

      Ok, some things got addressed later.

  • @hikaroto2791
    @hikaroto2791 2 years ago

    this was an astoundingly interesting video

  • @TheConfusled
    @TheConfusled 5 years ago

    Yay a new video. Mighty thanks to you

  • @ToriKo_
    @ToriKo_ 5 years ago +2

    I just want to say thanks for making these videos! Also nice Undertale reference

  • @ninjagraphics1
    @ninjagraphics1 5 years ago

    Thanks so much for this