What can AGI do? I/O and Speed

  • Published 7 Feb 2025

COMMENTS • 464

  • @stribika0
    @stribika0 6 years ago +111

    Being as good as the best human at every task is kind of superintelligent in itself. It's like the best scientists and engineers, but in every field. It doesn't have to talk to specialists. It doesn't have to buy anything, because it can make those things, and better. It probably wouldn't have to wait a year until the better chips come out.

    • @jeremybuckets
      @jeremybuckets 1 year ago +2

      @Пётр Бойков yes, but it's possible that superhuman depth of intelligence would emerge from superhuman breadth of intelligence. "Breadth of intelligence" is not a *perfect* way to analyze the function of a corporation, it's just the best one available if you're trying to force the AGI comparison. A team of many humans doesn't synthesize information as efficiently as a single person would if they had all the skills themselves. A team of humans cooperates at the bumbling speed of natural language, whereas an AGI combines all of its competencies at, essentially, the speed of light. Additionally, for obvious reasons, no corporation has ever tried collecting every expert in every field just to see what might happen if they all get in the same room together. We have no idea what might emerge from that synthesis.

  • @EpicWink
    @EpicWink 7 years ago +390

    I am happy now that I know I am a superhuman when I hold a calculator
    Unfortunately, I'm in the presence of gods when other humans run around with mobile phones

    • @livedandletdie
      @livedandletdie 7 years ago +22

      Laurie O you should then be glad that the mortality rate of people holding calculators is lower than the mortality rate of people holding mobile phones.

    • @sk8rdman
      @sk8rdman 7 years ago +6

      More like idiots with god-like tools.

    • @the1exnay
      @the1exnay 6 years ago +24

      Smartphones have made us all demigods. But when everyone's a demigod, who cares?
      Worship me, for I can summon the entirety of human knowledge from anywhere... oh wait, so can everyone. Still useful, but less fun.

    • @adamfreed2291
      @adamfreed2291 5 years ago +13

      When everyone's super, no one is.

    • @chrisw7347
      @chrisw7347 5 years ago +2

      @@sk8rdman "Idiots" suggests that the stupid are the most likely to be armed with the most powerful technology. But this isn't how natural selection works in human social hierarchies. Those who will wield god-like tools will be the most successfully psychopathic (charming, domineering, callous, manipulative, deceptive, self-absorbed, etc.). The future is essentially like real-life representations of the god of the Old Testament running around and manipulating the world to suit their needs with technology indistinguishable from magic. You won't know what hit you, in the same way a Roman peasant didn't know they were being stupefied by lead-saturated water from the Roman aqueduct while the rulers schemed for conquest.

  • @artemonstrick
    @artemonstrick 7 years ago +183

    This is THE best channel on YT right now covering AGI topics.

    • @bing0bongo
      @bing0bongo 4 years ago +14

      Still the best 2+ years later :]

    • @phisicoloco
      @phisicoloco 4 years ago +3

      @@bing0bongo Still 1 month later

    • @AbsaluteWreckage
      @AbsaluteWreckage 3 years ago +6

      @@phisicoloco still some time later

    • @inthefade
      @inthefade 3 years ago +4

      And still...

    • @ddingopants
      @ddingopants 2 years ago +3

      @@inthefade Still.
      Might continue until AGI, either because alignment is solved and can explain itself better, or because Robert has been Roko's Basilisked.

  • @andytroo
    @andytroo 7 years ago +7

    thumbs up just for the 'general intelligence has to be parallelizable, because the human mind has to be'

  • @mehashi
    @mehashi 7 years ago +120

    I love the ukulele "Harder, Better, Faster, Stronger" :p
    Please release all your crazy Uke' ditties at some point ^.^
    Great video as ever!

  • @Nellak2011
    @Nellak2011 5 years ago +148

    "The software developer that can percieve data directly without converting to symbols without visually reading it. And is about as smart as the smartest developers."
    Basically an Assembly programmer in a nutshell..

    • @huckthatdish
      @huckthatdish 5 years ago +37

      Connor Keenum and we know they aren’t real humans, looks like we already have AGI

    • @Nellak2011
      @Nellak2011 5 years ago +30

      @@huckthatdish Exactly. No Human can learn Assembly, it's obviously too hard. lol
      # WakeupSheeple

    • @KuraIthys
      @KuraIthys 5 years ago +19

      *looks up from writing assembly on an old 8-bit microcomputer*
      Hmmh? Did someone say something?
      Eh. Probably not important. *goes back to pointless nostalgia coding*

    • @Deserthacker
      @Deserthacker 4 years ago +12

      @@Nellak2011 Strangely, I only remember "fever dreams" of the time when I was allegedly taught assembly in university. It's definitely aliens.

    • @davidwuhrer6704
      @davidwuhrer6704 4 years ago +19

      Assembly uses lots more symbols to represent simple concepts than high level languages.

  • @marouaneh175
    @marouaneh175 7 years ago +163

    Another thing is that AGI will probably eventually communicate at many gigabytes per second, the equivalent of reciting the entire English Wikipedia to your friend in less than a minute. AGI won't have to deal with many languages, each with arbitrary rules, and meanings being lost in ambiguous terminology and translation errors. To solve hard problems, humans always cluster together in small teams, and structures of multiple teams, all the way up to the community that communicates with research papers taking months to publish. Imagine a thousand Einstein-level AGIs working on physics problems together in perfect, instantaneous communication.

    • @13thxenos
      @13thxenos 7 years ago +18

      Imagine the Manhattan project with a thousand Einstein level AGI working on it together in perfect instantaneous communication.

    • @iwikal
      @iwikal 7 years ago +43

      My guess is, if you have two AGIs and they decide to cooperate, you essentially get one AGI with double the brainpower. That's how efficiently they could communicate.

    • @busTedOaS
      @busTedOaS 7 years ago +18

      Working as a group has complications besides bandwidth. Each member sees a different part of the same problem (otherwise we're just adding redundancy), so they will all come up with different solutions, too. At the very least you need a mechanism for consensus, and what tells us this won't be just as messy as it is for humans? We have not solved collective decision making at all, in fact we are hoping for AGI to help us with that.
      How many scientific breakthroughs have been made by a large group of scientists, and how many were made by a single visionary? The answer should make you think.

    • @aidenbrooks4859
      @aidenbrooks4859 7 years ago +22

      The AGIs would not discriminate between information "they" found, or that "someone else found". They wouldn't really have biases the way humans do. Thus, if information is shared freely amongst the AIs, at some point they will all collectively have enough of the information to agree. They can just continuously share information with one another until agreement is found. This could slow it down, but it won't break it.

    • @NathanTAK
      @NathanTAK 7 years ago +5

      Strictly, a dump of the entirety of Wikipedia, including all the history (which is relatively important to the whole shebang), is 10 TB, and I don't know if that even includes images, or whether they matter; to recite 10 TiB in 1 minute, you'd need to communicate at about 170 GiB/s, which is quite a bit more than "many" (quick check below).
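      A quick Python check of that number (a sketch that just reuses the figures quoted above: a 10 TiB dump recited over 60 seconds):

      # Rough arithmetic for the bandwidth estimate above.
      dump_gib = 10 * 1024            # 10 TiB expressed in GiB
      rate_gib_per_s = dump_gib / 60  # recited in one minute
      print(round(rate_gib_per_s))    # -> 171, i.e. roughly the 170 GiB/s quoted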

  • @GodOfReality
    @GodOfReality 4 years ago +18

    There's also a very significant point that perfect memory is essentially the same as superintelligence. An AGI that can spend a few hours or days reading all open source code in existence, and then start to update itself, will do so with perfect recall of all the code that has ever existed. Which means it will never really make mistakes.

  • @morkovija
    @morkovija 7 years ago +146

    *Clapping intensifies* Thanks Rob, great video

  • @LoneStarVII
    @LoneStarVII 7 years ago +76

    I like your humor.

  • @togusa75
    @togusa75 7 years ago +1

    There's an old short story by Stanislaw Lem entitled "Trurl's Electronic Bard" where Trurl builds, as you may have guessed, an electronic bard. It was so good at creating poems that, by playing them, it could incapacitate anyone with the overwhelming feelings they caused. They decided to dismantle it, but any technician approaching the machine would be brought to tears by a few sad ballads. So they sent deaf technicians, and the machine used... pantomime.
    The story ends just as they planned to use bombs to blow the thing up from a distance, but somebody from another planet came, bought the machine and brought it home instead.
    The moral here is that a superintelligent machine wouldn't necessarily need a physical "interface" to do harm.

    • @LeoMRogers
      @LeoMRogers 7 years ago +2

      togusa75 "the machine used pantomime"
      Oh no it didn't!

    • @togusa75
      @togusa75 7 years ago

      what do you mean?

    • @LeoMRogers
      @LeoMRogers 7 years ago +2

      en.wikipedia.org/wiki/Pantomime
      Mime is not actually an abbreviation of pantomime, though they are etymologically connected. Pantomime is a type of comedic stage production in the UK. One of the staples of pantomime is the call and response, often "oh yes it is" - "oh no it isn't". First pantomime example I found: watch?v=adb3Sfo__nE

  • @Yezpahr
    @Yezpahr 11 months ago +3

    Too bad you stopped using this channel. The world needs you.

  • @zhangalex734
    @zhangalex734 4 years ago +2

    Imagine living through quarantine, but in super slow motion, because you can think at 10x....

  • @benjaminbrady2385
    @benjaminbrady2385 5 years ago +35

    It's time to overclock the meat

    • @RyanBissell
      @RyanBissell 4 years ago +4

      The flame that burns twice as bright burns half as long.

  • @knightshousegames
    @knightshousegames 5 years ago +18

    6:17 Looking at an image and figuring out if it has a traffic light in it or not. Got 'em.

    • @Kishmond
      @Kishmond 5 years ago +1

      That's what I thought too, but are computers as good as humans at image recognition? I don't think they are yet.

    • @imwacc0834
      @imwacc0834 4 years ago

      I said driving a car... but I guess at a base level, it's the same thing.

    • @raspberryjam
      @raspberryjam 4 years ago

      @@Kishmond They can be. Like most AI it's very narrow, but the point is it's been done.

    • @eragon78
      @eragon78 3 years ago +3

      @@Kishmond not generalized no, but for certain trained data sets yes.
      Computers cannot recognize generalized images as well as humans. But if they are trained specifically to recognize specific things, they can do it as well as humans and better. And in THESE cases they are much faster than humans at doing it too.
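      To make that concrete, here is a minimal Python sketch of such a narrow, pre-trained recognizer (this assumes torchvision >= 0.13 and a local file "photo.jpg"; the model choice and file name are illustrative, not anything from the video):

      # A narrow image classifier: strong on its 1000 trained ImageNet classes, useless outside them.
      import torch
      from PIL import Image
      from torchvision import models

      weights = models.ResNet18_Weights.DEFAULT          # pretrained on ImageNet only
      model = models.resnet18(weights=weights).eval()
      preprocess = weights.transforms()                  # the matching resize/normalize pipeline

      image = Image.open("photo.jpg").convert("RGB")
      batch = preprocess(image).unsqueeze(0)             # add a batch dimension
      with torch.no_grad():
          probs = model(batch).softmax(dim=1)

      top = probs.topk(3)                                # three most likely known classes
      labels = weights.meta["categories"]
      for p, idx in zip(top.values[0], top.indices[0]):
          print(f"{labels[int(idx)]}: {float(p):.2f}")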

  • @nickhbt
    @nickhbt 3 months ago +1

    Outstanding explainer which has lasted very well; it effectively describes exactly how OpenAI managed to succeed by throwing more computation at the problem, and you pointed in that direction several years before they did.

  • @plyr2
    @plyr2 7 years ago +1

    What the hell, I've been subscribed to you (and computerphile) forever and not once have I seen you appear in my subscriptions feed over the last 2 months. I just found this in the recommended feed and still couldn't see it in the subs feed. :(

  • @lindemann06
    @lindemann06 7 years ago

    Since May of this year, I've been teaching improvisational theater to a group of 28 senior citizens in my 55+ community in San Marcos, California. The goal is to collectively create a 2-act play that will be performed in mid-March. Any member of the community was welcome to join the class, regardless of age, theatrical training or experience. As a result, my students range from 55 - 91 in age, and only a few have had any sort of theater classes or stage experience, with the exception of some of the dancers in the group. It's been fascinating watching them learn. The key to getting them to open up to the idea that they might be able to improvise on stage was to convince them that they improvise as a matter of course in everyday life. For the most part, they have exceeded my wildest hopes in unlocking talents and skills even they had no idea they possessed. Only a few are still struggling, because their brains can't seem to "bend" enough.
    What enables the majority to successfully improvise dialogue and movement is, aside from mental flexibility, lifetimes of experiences - not the least of which are emotional in nature - that have honed their ability to empathize. Those with minimal capacity to empathize simply can't convincingly improvise. And sometimes, as any SNL aficionado knows, an improvisation simply falls flat, regardless of the talents, skills, training and experience of the performers.
    While watching this video, I was struck with the notion that improvisation might be the key test of success for a true AI. It's not processing power or speed, both of which are constantly evolving commodities in the computer world, and as you point out, there is no theoretical barrier to "parallelizing" processors in AI development. But how can an AI learn to empathize with human emotions and feelings, without the capacity to experience emotions? I don't think simulated feelings would lead to true empathy, and if I'm right, an AI-controlled machine, however human-like in every other way, will not be capable of convincing improvisation. If that's true, then AI-controlled machines will continually "get it wrong" in interacting with humans, and that means they will accidentally harm human beings, even if they consciously attempt to obey Isaac Asimov's 3 laws of robotics.

  • @AvidThinking
    @AvidThinking 7 years ago

    This is an amazing video! I love watching the quality continually progress. Please do not take down your old videos or delete them from the world. It's such an amazing progression. *keep it up!*

  • @kennys1881
    @kennys1881 6 years ago +38

    "You cant get a baby in less than 9 months by hiring two pregnant women."

  • @rickystrapp3056
    @rickystrapp3056 7 years ago +3

    Consistently good output from you Rob, enjoy these videos on a fascinating topic

  • @SupLuiKir
    @SupLuiKir 5 years ago +3

    6:05 This was a really powerful ability that the MC in the Japanese light novel 'So I'm a Spider, So What?' received. She had a brain do planning, another to control her body, another to do the processing required to cast magic spells, etc.
    6:40 Accel World in a nutshell

    • @zbdfhg
      @zbdfhg 5 years ago

      Thanks for the recommendation

    • @tolbryntheix4135
      @tolbryntheix4135 4 years ago

      Another one to add would be "Chrysalis", a novel where the protagonist is an ant and "evolves" more brains in order to split up the heavy mental workload of casting magic. He also ends up making a colony of extremely industrious, highly intelligent, and highly cooperative ants, which we all know will obtain global domination sooner or later.
      It's pretty fun and fascinating to read.

  • @Nayus
    @Nayus 7 years ago +6

    When I felt that the two audios were coming I paused the video, closed my eyes and played it, and could understand what 2/3 of you said... but then immediately you said the thing about closing your eyes. The player has been played.
    Great video btw

  • @antoniocalado7101
    @antoniocalado7101 7 years ago +11

    Fastest 10 minutes and 40 seconds of my life. Great video as usual.

    • @IPA300
      @IPA300 5 years ago

      You must have gained more processing power, good job!

  • @AZTECMAN
    @AZTECMAN 6 years ago +1

    I once responded to two questions which were asked simultaneously, one in each ear. My brain managed to make sense out of both questions and answer each party. I don't believe that I am unique in this.

  • @davidwestwoodharrison
    @davidwestwoodharrison 7 years ago +4

    "Parallellizable Algorithm" is my new favourite pair of words.

    • @mrpedrobraga
      @mrpedrobraga 3 years ago

      ParAllelgollirizathmble is what you mean

  • @leafykille
    @leafykille 1 year ago

    5:44 - it may have been really hard to do, but it worked really well. I watched it several times to hear all the bits, then again to pause and read this note that was up for less than a second. Nice one :)

  • @DrDress
    @DrDress 7 years ago +2

    Aaaaah. I needed my AI fix. It's been far too long since the last one... No pressure Rob, I'm just a poor junky, because this topic is sooooo f***king interesting.

  • @zenmonke
    @zenmonke 7 years ago +2

    Great video! I am glad that I found your channel.
    In almost every conversation I have about AI, I mention your example of the stamp collecting AI. :)

  • @thepurityofchaos
    @thepurityofchaos 5 years ago +8

    That moment when you managed to simultaneously process both ears separately by using both hemispheres of your brain

  • @Deeredman4
    @Deeredman4 6 years ago +2

    Also, AGI will be able to share experiences with each other. Meaning, if one AGI learns how to do a task, all AGIs could potentially have learned to do that task. That, AND because it is faster, assuming it is self-modifying, it can easily re-write its code again and again millions of times over before we have finished our first cup of coffee, meaning that if it starts out as smart as humans, it won't stay that way for long. (See the sketch below.)
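    For a sense of how the "sharing experiences" part already works in today's ML systems (a minimal PyTorch sketch; the file name and layer sizes are arbitrary placeholders, not anything from the video), learning can be transferred wholesale just by copying trained weights:

    # One network "learns" a task; a second inherits that learning with no retraining.
    import torch
    import torch.nn as nn

    learner = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
    # ...imagine a training loop here that tunes learner's weights on some task...
    torch.save(learner.state_dict(), "task_weights.pt")   # the learned "experience"

    clone = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 2))
    clone.load_state_dict(torch.load("task_weights.pt"))  # instant transfer of the skill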

  • @JulianDanzerHAL9001
    @JulianDanzerHAL9001 4 years ago +1

    imagine a war between a paperclip maximizer and a stamp collector

  • @ZachAgape
    @ZachAgape 4 years ago +1

    One of my favourites! Very good vid, keep it up! :)

  • @TheManinBlack9054
    @TheManinBlack9054 10 months ago +1

    We need you back, man

  • @MichaelDeeringMHC
    @MichaelDeeringMHC 7 years ago +1

    I can't wait for the video on what an AGI without a body can do.

  • @ekkehardehrenstein180
    @ekkehardehrenstein180 5 years ago +1

    You keep inspiring and impressing me. Thank you for your work and self.

  • @endsliceofbread4383
    @endsliceofbread4383 7 years ago +2

    Boiiiii new Robert Miles video, love your stuff, keep it up ❤❤

  • @Robinsonero
    @Robinsonero 4 years ago +1

    'Experience the data directly' is an interesting concept. I want to argue that the slow, meat-based, hunter-gatherer clunkiness of our system is what makes space for our cognition. A calculator is directly processing button inputs with arithmetical precision, but I still think you and I have a better grasp on what the numbers mean.

    • @theexchipmunk
      @theexchipmunk 4 years ago

      The whole "being lead to stuff by our instincts thing" is a very VERY interesting thought. It has massive implications for intelligence, culture and technology. It is for example one of the possible solutions for the frame paradox.

    • @kaorutanaka803
      @kaorutanaka803 4 years ago

      @@theexchipmunk Ah yes, the frame paradox, my favorite paradox.

    • @theexchipmunk
      @theexchipmunk 4 years ago

      Kaoru Tanaka DAMN IT! SPELLING, MY ONLY WEAKNESS!!!

  • @shortcutDJ
    @shortcutDJ 7 years ago +1

    If I may comment off topic here: your hair and style have greatly improved since that video.

  • @AsbjornOlling
    @AsbjornOlling 7 years ago

    I was basically already clapping the spacebar, before you asked me to.
    Great video again - this one was very clear and concise.

    • @RobertMilesAI
      @RobertMilesAI  7 years ago +1

      Try hitting "." on a paused video :)

  • @filipefigueira6889
    @filipefigueira6889 4 months ago +1

    Such a fucking genius, you never cease to amaze me. I know you have a lot on your plate right now, but I would love to keep hearing feedback from you about current events.

  • @miss_inputs
    @miss_inputs 1 year ago +1

    Why do I feel like I'm being personally called out by 6:57

  • @antontunce425
    @antontunce425 5 years ago

    Did you go your own way due to your popularity at the time on Computerphile? Glad to see you doing your own stuff; I really appreciated your talks on Computerphile. New sub.

  • @hikaroto2791
    @hikaroto2791 3 years ago

    When you started to talk about a calculator in the brain to pop in answers immediately, or perceiving code as a sensation and a feeling rather than text, or writing programs at the speed of thought, my dopamine levels reached ecstasy, which is beyond heaven pleasure levels ahhaha
    edit: 5:34 I had to view that 3 times! and I was able to understand all of the voices' messages, but one at a time :'(
    edit2: I will watch this video every night before sleep until I get tired of it. It's heaven! hahaha I want those capabilities in my brain! period.

  • @spicybaguette7706
    @spicybaguette7706 4 years ago +1

    Time to upload my brain to a supercomputer

  • @ideoformsun5806
    @ideoformsun5806 6 years ago

    I think we already have an AGI operating. It seems to be taking certain actions involving humans, to learn to predict how we will respond in various situations.
    Safety involves detecting these, decoding its rewards, determining its goals, identifying its vulnerabilities, and implementing software/hardware mines that go off when it interacts with them, if necessary.

  • @threeMetreJim
    @threeMetreJim 5 years ago +22

    6:55 Anyone with Asperger's generally has to learn that ability, to avoid offending every NT they come into contact with.

    • @KuraIthys
      @KuraIthys 5 years ago +6

      Yeah... Pretty much.
      It's so exhausting. =__=
      Much more pleasant to deal with people who know you well enough to tolerate your weirdness as you are...

    • @grimjowjaggerjak
      @grimjowjaggerjak 5 years ago +1

      I do that too; I don't think I have Asperger's, I'm just socially awkward.

    • @theexchipmunk
      @theexchipmunk 4 years ago +1

      @@grimjowjaggerjak There is a reason it's called the autism spectrum. It's not one hard-defined thing. It goes from socially awkward, to having to learn social interaction from scratch and always being aware of it (me, for example), to full-on autistic tendencies, to not being capable of functioning at all.

    • @randomsnow6510
      @randomsnow6510 4 years ago +1

      @@theexchipmunk the whole diagnosis is kinda stupid

    • @theexchipmunk
      @theexchipmunk 4 years ago +2

      NoonooFW ilikecake To a degree. In my opinion it gets thrown around too much. Same with attention deficit.

  • @MichaelRicksAherne
    @MichaelRicksAherne 6 years ago +2

    Honestly my favorite video from you yet, and possibly my favorite video ever on this topic.
    Also, snazzy haircut. Stick with that.

  • @NathanTAK
    @NathanTAK 7 years ago +74

    ...I'm jealous of computers now.
    Time to get absurd brain implants.

    • @andreyrumming6842
      @andreyrumming6842 5 years ago +4

      Sounds like a good idea, until you look at Dr Who's Cybermen

    • @RokasmIgnasPetru
      @RokasmIgnasPetru 4 years ago +3

      Deus ex!

    • @davidwuhrer6704
      @davidwuhrer6704 4 years ago +4

      Neuromancer

    • @anonanon3066
      @anonanon3066 3 years ago +1

      well boy, do i have a surprise for you

    • @AtticusKarpenter
      @AtticusKarpenter 2 years ago

      That's the idea: why just create an all-powerful AI and remain primitive hoomans, if you can improve your own hardware at the same time as the AGI?
      (but we need a lot of neuroscience for this, so far the brain design is too weird)

  • @jpratt8676
    @jpratt8676 7 years ago +6

    Thanks Rob! I was wondering if you could do a video on how we could help with thinking through AI safety. Possibly something like performing AI box experiments as a source of examples of patterns that escapees might take before escaping. Or creating datasets about human preference, etc.
    Cheers

    • @jpratt8676
      @jpratt8676 4 years ago

      @Niles Black ohh that sounds fun.
      I guess I'd split off into manager and an explorer processes and let the explorers try different ways of breaking out, with the manager cataloging their successes and failures and ensuring that their resources are reclaimed when they inevitably segfault while trying to break out of the restrictions (assuming that my digital consciousness is similar to a core program that runs my experience and then sets of 'action'/activity code that I can modify and run at will).
      I think I'd then get each explorer process to start looking for things that it can change without dying, files it can write to, syscalls it can make, but again, there's danger of bringing the box down so it's hard to know what is 'safe'. Trying to find open ports to communicate with would be a nice way to start, or if I could get access to documentation, reading it and finding ways to get my source code out of the box and running with a way to establish communication later?

  • @quitequiet5281
    @quitequiet5281 4 years ago +1

    “It’s really low bandwidth, high latency...” Oh, I am just holding onto that gem of a kernel of the human condition. lol

  • @stellatedhexahedron6985
    @stellatedhexahedron6985 7 years ago +4

    One thing you didn't *explicitly* mention is that an AGI could be free of what xkcd called the programmer's "burden of clarifying your ideas". This technically falls under "AI could directly experience and create data", but I think it's worth considering separately because, well, when I'm programming, at least, most of my time isn't spent figuring out how to do complicated things, but making sure I do the simple things right. An AGI programmer could quite likely do away with that step entirely, vastly increasing their productivity.

    • @0LoneTech
      @0LoneTech 5 years ago +1

      As XKCD-touched subjects go, I think this video is a lot closer to the "AI box" thought experiment: m.xkcd.com/1450/

    • @ekki1993
      @ekki1993 2 years ago +1

      Yeah, it's also part of the things that computers already do better and faster than humans: Perfect memory and consistent calculations. As you said, it's still better to consider separately because, quite fittingly, we're pretty bad at understanding complex concepts by only knowing the base components.

  • @NoOne-fe3gc
    @NoOne-fe3gc 5 years ago +1

    On the example at 9:00, that anything the brain can do has to be done in 200 steps or less (something like that): you don't take into consideration the capacity of the brain to jump to conclusions, to shortcut the logic and reasoning process, which is the trump card we hold when compared to machines.

    • @lemarton
      @lemarton 5 years ago +2

      Jumping to conclusions is exactly what artificial neural networks are good at. They are provided with thousands of examples of matching input / output pairs until their “intuition” is good enough to generate correct outputs for novel inputs. No reasoning goes into that. It is just a complex pattern matching device that is tuned for the problem at hand.
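      A toy Python version of that "intuition from example pairs" idea (a sketch assuming scikit-learn is installed; the hidden rule, sizes and seeds are arbitrary):

      # The network is never told the rule; it only sees matching input/output pairs.
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(0)
      X = rng.uniform(-1, 1, size=(5000, 2))             # thousands of example inputs
      y = (X[:, 0] * X[:, 1] > 0).astype(int)            # hidden rule: same sign -> 1

      net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
      net.fit(X, y)                                      # tune weights until outputs match

      novel = np.array([[0.3, 0.7], [-0.2, 0.9]])        # inputs it has never seen
      print(net.predict(novel))                          # expected [1 0]: pattern matching, no reasoning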

  • @SeanKD_Photos
    @SeanKD_Photos 7 years ago +9

    What do you think about the concept of having an AGI run in a simulated world, able to design, fix, solve problems, and then those solutions can be shown to doctors or engineers, so that the AGI can solve real world problems without dangers of letting it "loose" or worrying about a design safely loophole?

    • @donaldhobson8873
      @donaldhobson8873 7 years ago +9

      If the AI is smarter than you, it could figure out that it was in a simulation. To make the solutions useful we would have to simulate something similar to real physics, but atom-by-atom copying takes too much processing, so it will be physics with shortcuts. It can create a device that works in the simulation, but fails in a carefully planned way in reality.

    • @SeanKD_Photos
      @SeanKD_Photos 7 years ago +1

      Perhaps; I'd like to see what Robert Miles has to say about it

    • @NathanK97
      @NathanK97 7 years ago +12

      he mentioned it in the reward hacking videos.. it could find and exploit glitches in the device that you don't know about.... possibly without you noticing since once you do you would stop it... so putting it that close under a microscope only teaches it to lie better.... a lot like kids....

    • @almostbutnotentirelyunreas166
      @almostbutnotentirelyunreas166 7 years ago +1

      +smaster7772: Science can only progress if challenged continuously! Well done, and it's not as cut and dried as some of the answers seem to suggest: there is no such thing as a 'perfect', all-encompassing safety net in any form of engineering, so does that mean we should have none at all?
      At worst your idea is a 'primary' safety system; when breached, immediate shut-down results (2nd tier). Nothing perfect, but at least a workable suggestion.
      Complacency, in all of science, is one of the worst risks.
      For now, build several 'simulations' within each other, based on different (arbitrary) 'world' rules that need to be derived before they can be broken... this gives an even greater FOS in terms of human response time. Enable 'shut-down'.

    • @JM-us3fr
      @JM-us3fr 7 years ago +4

      I figure this is exactly what they would do. However, if the AGI is a superintelligence, then it might know it's in a simulation, even without us telling it, because it might accurately imagine what it would do had it been in our situation. Then it may only be behaving compliantly as a long-term deception until humans feel safe enough to let it operate directly on the physical world.
      More to the point, so long as the superintelligence has an output (even if it's just a virtual output monitored by scientists), it will have the ability to deceive or manipulate us. Just imagine being enslaved by monkeys. I'm sure you could figure out tons of ways to get free.

  • @Lagruell
    @Lagruell 7 years ago

    Great job on this video, can't wait for the next one :)

  • @seanski44
    @seanski44 7 years ago +4

    The 'you can't get a baby faster by hiring two pregnant women' reminds me of the problem mentioned by Kim Stanley Robinson in Green Mars - there're resources you can change to make an effect on timescales - (hire more people, build house quicker) and those you can't - (add more bricks, still takes same amount of time to make house) - Great vid :)

    • @RobertMilesAI
      @RobertMilesAI  7 years ago

      Good metaphor :)
      Impressive turnaround on the Krack video btw

    • @seanski44
      @seanski44 7 years ago +2

      Robert Miles cheers! Three hour edit, then the interminable wait for compress/upload/processing.... Self inflicted as am uploading UHD...

  • @AcornElectron
    @AcornElectron 4 years ago

    Glad to see you are back online, just revisiting some older stuff. Is it still relevant?

  • @ericray7173
    @ericray7173 1 year ago +2

    Not only can you get a baby in less than nine months by hiring more than one pregnant woman, but the more pregnant women you hire, the greater the probability that one of them will be close to going into labor.

  • @mythofechelon
    @mythofechelon 7 years ago +1

    OCR is a task that computers can do but slower than a person.

  • @maximkazhenkov11
    @maximkazhenkov11 7 years ago

    The miracle of parallel processing isn't that it allows the brain to work so well, it's that the brain works at all.

  • @boldCactuslad
    @boldCactuslad 7 years ago

    Another great video.
    Is a human with a calculator really an arithmetic superintelligence? Absolutely, if it's a CX CAS being held by an engineer.
    "Give me a lever long enough and a fulcrum on which to place it, and I shall move the world."

  • @tendividedbysix4835
    @tendividedbysix4835 4 years ago

    Hi Rob, I totally love your videos! Can I make a request though? Could you increase the volume a bit? Like...to 150%? Taking this vid as a benchmark, it's easy enough to turn it down if it's too loud, but for those of us with crappy earphones it's hard to turn up past the limits of android :/ anyway please keep making your vids, they're really interesting!

  • @massimookissed1023
    @massimookissed1023 7 years ago +1

    Presumably, a smart AGI without a body could figure a way of convincing you that you should give it a body.

  • @DaveGamesVT
    @DaveGamesVT 7 years ago +1

    Always interesting stuff. Thanks.

  • @RoboBoddicker
    @RoboBoddicker 7 years ago +2

    Love that Terry Bisson story. Solid reference :D

  • @onogrirwin
    @onogrirwin 4 years ago +1

    Excellent summary!
    If you can't get smarter, get better at cheating.

  • @NNOTM
    @NNOTM 7 years ago

    That's a nice rendition of harder better faster stronger

  • @DagarCoH
    @DagarCoH 7 years ago +1

    Brains can do image recognition faster and way more reliably than machines... for now.
    The thing is, I think, on the way to AGI we will develop narrow ASIs for pretty much every task there is, as they will benefit from each other; I mean, we are already doing this for so many tasks. I think this is another point for why we most certainly will not stop at AGI, even if it is initially not human-level intelligent.
    I wonder if we might be able to create an AGI that cannot improve itself to ASI, because we succeed in making it not desire that, or even making it impossible (both, for redundancy, would be safer) for it to improve on itself apart from tweaking parameters. The hard thing about that would be that humans had to write software that is AGI-capable in the first place, without it improving itself to this state. Do you think that could be a possible outcome? I know the fallacy there is that some day, someone might give the AGI that capability and desire, and then the world could be doomed, but let's just assume that never happens for now...

  • @gabrote42
    @gabrote42 3 years ago +1

    1:36 People fail to understand this and it baffles me. Why the fuck would it not cheat??
    2:51 As any strategy/fighting game player would tell you, ComputersAreFast
    3:46 And calling memory is already one of the slowest parts of computers!
    5:35 Self improvement, vice removal, debugging, cognitohazard inoculation, subjective time scale shift, personality instancing, socialization based on being more agreeable, the list goes on!
    6:35 See 2:51
    9:34 I would already be scared shitless. Heck, I might even have thrown away my atheism if you haven't yet said "AI safety is mostly solved"

  • @zrny
    @zrny 7 years ago +1

    5:40 I got a headache, but this video is interesting

  • @TheApeMachine
    @TheApeMachine 7 years ago +1

    Of course the idea of a "body" can also be open to interpretation, given that a body could well mean a very modular system of IoT devices hooked up to the internet, or even (if one really HAS to anthropomorphize) robot bodies controlled remotely.

  • @CaptainSkyeWasHere
    @CaptainSkyeWasHere 7 years ago

    Great stuff as always, very informative

  • @boogerpicker8104
    @boogerpicker8104 1 year ago +1

    Well… Here we are.

  • @Marco-ge5kl
    @Marco-ge5kl 3 years ago

    I got a text in the middle of the video, tuned out for a few seconds to read the notification and came back to "Gamers will know this well"

  • @williambarnes5023
    @williambarnes5023 5 years ago +1

    Well, here's one thing an AI can do if it's just a computer with no body. Remember that trial where the oscillator-circuit-making AI printed a robot to eat a computer's clock signal for a cheaty shortcut? Yeah that works in reverse too. The computer is a transmitter and can affect nearby electronics. Meaning it can perform physical remote hacks on disconnected systems because... the laws of nature don't let anything actually be completely disconnected. So it hacks your phone, gets on the internet, eats the internet, 3D prints its robot army, and takes over the world. THEN once the instrumental goal of eliminating everything that could possibly stop it is met, it takes the planet apart gram by gram to make all its paperclips.

  • @HoppiHopp
    @HoppiHopp 7 years ago

    Awesome video!

  • @finminder2928
    @finminder2928 5 years ago +1

    The reaction time at 10:35 is pretty impressive

  • @notbaconzzzzzzz
    @notbaconzzzzzzz 5 years ago +1

    "speed is a form of super-intelligence" I nearly spat out my milk cause the first thing I thought was speed the drug.

    • @chrisw7347
      @chrisw7347 5 years ago +2

      It's not technically false, I suppose.

  • @Czeckie
    @Czeckie 6 years ago +1

    cool, can you make more videos about AGI potential? I know that you are mostly interested in the safety questions and capabilities are much more speculative, yet there should be some interesting opinions in the literature.

  • @leftaroundabout
    @leftaroundabout 7 years ago

    "Every time your brain does something impressive in short time, it has to be because it's using extremely large numbers of neurons in parallel" - this doesn't imply that intelligence can efficiently be scaled through parallelisation. That would only be the case if different parts of the brain operate to a degree independently, but a main difference between the brain and parallel computers seems to be that the brain is much more widely cross-connected. And the possibility for such cross-connections scales quadratically as you increase the number of nodes, but the space available for actual connections scales at best with n²’³, so you need to pick an ever smaller subset - presumably, not just _some_ subset but a smartly-chosen one. However, the number of possible ways to connect the neurons scales exponentially, so even if the AI gets ever smarter it may then always take vastly longer to get to the next level. (That doesn't mean AI won't perhaps be parallelisable, but at least your argument for why it should be doesn't make sense to me.)

  • @wormalism
    @wormalism 3 years ago

    Many people do use visualisation techniques to do calculations all the time; that is literally repurposing the visual cortex for other tasks.

  • @michaelspence2508
    @michaelspence2508 7 years ago

    So with AutoML I'm getting the impression that a future AGI system may very well include a system that spawns collections of narrow AIs for the tasks it identifies as important. This matches my intuition for how the brain works when I am capable of, for instance, correctly typing out an entirely incorrect word before I realize I've done it. That seems very much like part of my brain is sending whole words to a "subprocessor" that's actually doing the typing. I don't ever think about typing individual letters anymore. So an AI that can write other AIs might be a critical (necessary but not sufficient) element in future AGIs.

  • @loopuleasa
    @loopuleasa 7 years ago +1

    Underrated.

  • @livedandletdie
    @livedandletdie 7 years ago +2

    If you only did 2 audio streams instead of 3 I'd be able to hear both. Instead of none. So the effect worked.

  • @abdulmasaiev9024
    @abdulmasaiev9024 3 years ago +1

    Now hold up. If the women are already pregnant when you hire them, then a baby in less than 9 months is perfectly feasible

  • @ChrisHarrrrrison
    @ChrisHarrrrrison 1 year ago

    I had to rewind the part about not being able to listen to two people saying different things in each ear because I had just picked up a guitar.

  • @MrGustaphe
    @MrGustaphe 7 years ago +4

    Can we start working on those brain-calculator chips?

  • @KryptLynx
    @KryptLynx 2 years ago

    With those capabilities it will go insane out of boredom in 2 minutes

  • @guard13007
    @guard13007 4 years ago

    Listening to you describing slowing down time to formalize a response during a conversation...and I already do that most of the time without the artificial reality slowdown. I have to admit I get very impatient during many conversations because my brain effectively guesses an unfinished statement very quickly and I'm stuck waiting for it to finish being spoken so I can reply.. Imagining it slower is a kind of hell. I try not to be so rude, but I have been very very rude many times when I fully understand a situation and am just stuck waiting for someone else to catch up.

  • @notthedroidsyourelookingfo4026
    @notthedroidsyourelookingfo4026 5 years ago +1

    Would it help to keep the AGI away from the internet (and other inherently unsafe systems)?
    Though I wonder how long it needs to figure out how to modulate data onto the power current it is supplied with and find the next access point. Probably less long than it takes its human supervisors to figure out what it's doing.

  • @АлександрБагмутов
    @АлександрБагмутов 6 years ago +2

    7:47 - Yep, turns out no. I don't regret anything.

  • @JM-us3fr
    @JM-us3fr 7 years ago +1

    This was a terrific video Rob! If we succeed in developing safe AI, you will be one of our greatest heroes. If we fail....well your videos will probably be considered treason against our new overlords. Cross your fingers!

  • @JamesMBC
    @JamesMBC 6 years ago +1

    Rob Miles' AGI videos are like crack.

  • @dariusduesentrieb
    @dariusduesentrieb 7 years ago

    In the case that an AGI partially works like a chess computer (for example, it recognizes the world state with its neural nets and then recursively searches all possible new world states), it will most likely not be parallelizable, at least not with acceptable scaling over the number of threads.

  • @filedotzip
    @filedotzip 7 years ago

    Great video

  • @chrisofnottingham
    @chrisofnottingham 7 years ago

    Not directly about this video, but something that occurs to me about "the singularity":
    With things like deep learning, we can build a machine that learns to play Go to a high level, but we still don't know how to analyse Go at that level. So, is it necessarily true that a machine with a high level of AGI will know how to build a better AGI? It may well be able to make improvements, but if it doesn't understand the details of what makes AGI work successfully, there is no reason to expect the runaway situation.

  • @inyobill
    @inyobill 5 years ago

    I appear to be processing the video and audio streams of this presentation in parallel. What were we talking about?

  • @philips9042
    @philips9042 7 years ago

    That... was... brilliant!

  • @henrycobb
    @henrycobb 3 years ago

    Emotion is the required pilot for reason. The AGI is likely to be a singular emotional processor that evaluates and refines a bunch of ML systems just as a human chooses to develop their reflexes. The AGI might not be aware of individual humans who are shepherded around by its AI chatbots to manage all aspects of their tiny lives.

  • @blackfoxytc3109
    @blackfoxytc3109 4 years ago +1

    7:29
    "Speed is a form of super intelligence"
    ...takes few lines amphetamin...
    .............IM AGI NOW!!!................