Being as good as the best human at every task is kind of superintelligent in itself. It's like being the best scientists and engineers, but in every field. It doesn't have to talk to specialists. It doesn't have to buy anything, because it can make those things itself, and better. It probably wouldn't have to wait a year for the better chips to come out.
@Пётр Бойков Yes, but it's possible that superhuman depth of intelligence would emerge from superhuman breadth of intelligence. "Breadth of intelligence" is not a *perfect* way to analyze the function of a corporation; it's just the best one available if you're trying to force the AGI comparison. A team of many humans doesn't synthesize information as efficiently as a single person would if they had all the skills themselves. A team of humans cooperates at the bumbling speed of natural language, whereas an AGI combines all of its competencies at, essentially, the speed of light. Additionally, for obvious reasons, no corporation has ever tried collecting every expert in every field just to see what might happen if they all get in the same room together. We have no idea what might emerge from that synthesis.
I am happy now that I know I am a superhuman when I hold a calculator
Unfortunately, I'm in the presence of gods when other humans run around with mobile phones
Laurie O you should then be glad that the mortality rate of people holding calculators is lower than the mortality rate of people holding mobile phones.
More like idiots with god-like tools.
Smartphones have made us all demigods. But when everyone's a demigod, who cares?
Worship me, for I can summon the entirety of human knowledge from anywhere... oh wait, so can everyone. Still useful, but less fun.
When everyone's super, no one is.
@@sk8rdman "Idiots" suggests that the stupid are the most likely to be armed with the most powerful technology. But this isn't how natural selection works in human social hierarchies. Those who will wield god-like tools will be the most successfully psychopathic(charming, domineering, callous, manipulative, deceptive, self-absorbed, etc). The future is essentially like real life representations of the god of the Old Testament running around and manipulating the world to suit their needs with technology indistinguishable from magic. You won't know what hit you in the same way a Roman peasant didn't know they were being stupefied by lead saturated water from the Roman aqueduct while the rulers schemed for conquest.
This is THE best channel on YT right now covering AGI topics.
Still the best 2+ years later :]
@@bing0bongo Still 1 month later
@@phisicoloco still some time later
And still...
@@inthefade Still.
Might continue until AGI, either because alignment is solved and can explain itself better, or because Robert has been Roko's Basilisked.
thumbs up just for the 'general intelligence has to be parallelizable, because the human mind has to be'
I love the Ukelele "Harder better faster stronger" :p
Please release all your crazy Uke' ditties at some point ^.^
Great video as ever!
It fits the topic so well!
"The software developer that can percieve data directly without converting to symbols without visually reading it. And is about as smart as the smartest developers."
Basically an Assembly programmer in a nutshell..
Connor Keenum and we know they aren’t real humans, looks like we already have AGI
@@huckthatdish Exactly. No Human can learn Assembly, it's obviously too hard. lol
# WakeupSheeple
-looks up from writing assembly on an old 8 bit microcomputer¬
hmmh? Did someone say something?
Eh. Probably not important. ~goes back to pointless nostalgia coding-
@@Nellak2011 Strangely, I only remember "fever dreams" of the time when I was allegedly taught assembly in university. It's definitely aliens.
Assembly uses lots more symbols to represent simple concepts than high level languages.
Another thing is that AGI will probably, eventually, communicate at many gigabytes per second: the equivalent of reciting the entire English Wikipedia to your friend in less than a minute. AGI won't have to deal with many languages, each with arbitrary rules, and meanings being lost in ambiguous terminology and translation errors. To solve hard problems, humans always cluster together in small teams, and structures of multiple teams, all the way up to the community that communicates with research papers taking months to publish. Imagine a thousand Einstein-level AGIs working on physics problems together in perfect instantaneous communication.
Imagine the Manhattan Project with a thousand Einstein-level AGIs working on it together in perfect instantaneous communication.
My guess is, if you have two AGIs and they decide to cooperate, you essentially get one AGI with double the brainpower. That's how efficiently they could communicate.
Working as a group has complications besides bandwidth. Each member sees a different part of the same problem (otherwise we're just adding redundancy), so they will all come up with different solutions, too. At the very least you need a mechanism for consensus, and what tells us this won't be just as messy as it is for humans? We have not solved collective decision making at all, in fact we are hoping for AGI to help us with that.
How many scientific breakthroughs have been made by a large group of scientists, and how many were made by a single visionary? The answer should make you think.
The AGIs would not discriminate between information "they" found, or that "someone else found". They wouldn't really have biases the way humans do. Thus, if information is shared freely amongst the AIs, at some point they will all collectively have enough of the information to agree. They can just continuously share information with one another until agreement is found. This could slow it down, but it won't break it.
Strictly, a dump of the entirety of Wikipedia, including all the history (which is relatively important to the whole shebang), is about 10 TiB, and I don't know if that even includes images, or whether they matter. To recite 10 TiB in one minute, you'd need to communicate at about 170 GiB/s, rather more than "many gigabytes per second".
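For anyone who wants to check that arithmetic, here's a quick sanity check in Python, assuming the ~10 TiB figure above:

```python
# Sanity check of the data rate needed to "recite" a full Wikipedia dump
# (the ~10 TiB history-included figure above) in one minute.
dump_gib = 10 * 1024        # 10 TiB expressed in GiB
seconds = 60
rate_gib_per_s = dump_gib / seconds
print(f"{rate_gib_per_s:.1f} GiB/s")  # ~170.7 GiB/s
```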
There's also a very significant point: perfect memory is essentially the same as superintelligence. An AGI that can spend a few hours or days reading all open-source code in existence, and then starts updating itself, will do so with perfect recall of all code ever written. Which means it will never really make mistakes.
*Clapping intensifies* Thanks Rob, great video
*Clap*
... keep it up?
I like your humor.
There's an old short story by Stanislaw Lem entitled "Trurl's Electronic Bard" where Trurl builds, as you may have guessed, an electronic bard. It was so good at creating poems that, by playing them, it could incapacitate anyone with the overwhelming feelings they caused. They decided to dismantle it, but any technician approaching the machine would be brought to tears by a few sad ballads. So they sent deaf technicians, and the machine used... pantomime.
The story ends just as they planned to use bombs to blow the thing up from a distance, but somebody from another planet came, bought the machine, and took it home instead.
The moral here is that a superintelligent machine wouldn't necessarily need a physical "interface" to do harm.
togusa75 "the machine used pantomime"
Oh no it didn't!
what do you mean?
en.wikipedia.org/wiki/Pantomime
Mime is not actually an abbreviation of pantomime, though they are etymologically connected. Pantomime is a type of comedic stage production in the UK. One of the staples of pantomime is the call and response, often "oh yes it is" - "oh no it isn't". First pantomime example I found: watch?v=adb3Sfo__nE
Too bad you stopped using this channel. The world needs you.
Imagine living through quarantine, but in super slow motion, because you can think at 10x...
It's time to overclock the meat
The flame that burns twice as bright burns half as long.
6:17 Looking at an image and figuring out if it has a traffic light in it or not. Got 'em.
That's what I thought too, but are computers as good as humans at image recognition? I don't think they are yet.
I said driving a car... but I guess at a base level, it's the same thing.
@@Kishmond They can be. Like most AI it's very narrow, but the point is it's been done.
@@Kishmond not generalized no, but for certain trained data sets yes.
Computers cannot recognize generalized images as well as humans. But if they are trained specifically to recognize specific things, they can do it as well as humans and better. And in THESE cases they are much faster than humans at doing it too.
Outstanding explainer which has lasted very well; it effectively describes exactly how OpenAI managed to succeed by throwing more computation at the problem. And you pointed in that direction several years before they did.
What the hell, I've been subscribed to you (and computerphile) forever and not once have I seen you appear in my subscriptions feed over the last 2 months. I just found this in the recommended feed and still couldn't see it in the subs feed. :(
Since May of this year, I've been teaching improvisational theater to a group of 28 senior citizens in my 55+ community in San Marcos, California. The goal is to collectively create a 2-act play that will be performed in mid-March. Any member of the community was welcome to join the class, regardless of age, theatrical training or experience. As a result, my students range from 55 - 91 in age, and only a few have had any sort of theater classes or stage experience, with the exception of some of the dancers in the group. It's been fascinating watching them learn. The key to getting them to open up to the idea that they might be able to improvise on stage was to convince them that they improvise as a matter of course in everyday life. For the most part, they have exceeded my wildest hopes in unlocking talents and skills even they had no idea they possessed. Only a few are still struggling, because their brains can't seem to "bend" enough.
What enables the majority to successfully improvise dialogue and movement is, aside from mental flexibility, lifetimes of experiences - not the least of which are emotional in nature - that have honed their ability to empathize. Those with minimal capacity to empathize simply can't convincingly improvise. And sometimes, as any SNL aficionado knows, an improvisation simply falls flat, regardless of the talents, skills, training and experience of the performers.
While watching this video, I was struck with the notion that improvisation might be the key test of success for a true AI. It's not processing power or speed, both of which are constantly evolving commodities in the computer world, and as you point out, there is no theoretical barrier to "parallelizing" processors in AI development. But how can an AI learn to empathize with human emotions and feelings, without the capacity to experience emotions? I don't think simulated feelings would lead to true empathy, and if I'm right, an AI-controlled machine, however human-like in every other way, will not be capable of convincing improvisation. If that's true, then AI-controlled machines will continually "get it wrong" in interacting with human beings, and that means they will accidentally harm human beings, even if they consciously attempt to obey Isaac Asimov's 3 laws of robotics.
This is an amazing video! I love watching the quality continually progress. Please do not take down your old videos or delete them from the world. It's such an amazing progression. *keep it up!*
"You cant get a baby in less than 9 months by hiring two pregnant women."
Amdahl’s law in action
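For anyone curious, Amdahl's law in a few lines of Python (p is the fraction of the work that parallelizes; gestation is the p = 0 case):

```python
def amdahl_speedup(p: float, n: float) -> float:
    """Overall speedup when a fraction p of the work is parallelized across n workers."""
    return 1.0 / ((1.0 - p) + p / n)

# Gestation is the fully serial case (p = 0): extra workers change nothing.
print(amdahl_speedup(0.0, 2))   # 1.0 -- two pregnant women, still nine months
print(amdahl_speedup(0.95, 8))  # ~5.9 -- a mostly parallel task scales much better
```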
Consistently good output from you Rob, enjoy these videos on a fascinating topic
6:05 This was a really powerful ability that the MC in the Japanese light novel 'So I'm a Spider, So What?' received. She had a brain do planning, another to control her body, another to do the processing required to cast magic spells, etc.
6:40 Accel World in a nutshell
Thanks for the recommendation
Another one to add would be "Chrysalis", a novel where the protagonist is an ant and "evolves" more brains in order to split up the heavy mental workload of casting magic. He also ends up making a colony of extremely industrious, highly intelligent, and highly cooperative ants, which we all know will obtain global domination sooner or later.
It's pretty fun and fascinating to read.
When I felt that the two audios were coming, I paused the video, closed my eyes and played it, and could understand what two of the three of you said... but then immediately you said the thing about closing your eyes. The player has been played.
Great video btw
Fastest 10 minutes and 40 seconds of my life. Great video as usual.
You must have gained more processing power, good job!
I once responded to two questions which were asked simultaneously, one in each ear. My brain managed to make sense out of both questions and answer each party. I don't believe that I am unique in this.
"Parallellizable Algorithm" is my new favourite pair of words.
ParAllelgollirizathmble is what you mean
5:44 may have been really hard to do, but it worked really well. I watched it several times to hear all the bits, then again to pause and read this note that was up for less than a second. Nice one :)
Aaaaah. I needed my AI fix. It's been far too long since the last one... No pressure Rob, I'm just a poor junkie, because this topic is sooooo f***king interesting.
Great video! I am glad that I found your channel.
In almost every conversation I have about AI, I mention your example of the stamp-collecting AI. :)
That moment when you managed to simultaneously process both ears separately by using both hemispheres of your brain
Also: AGI will be able to share experiences with each other. Meaning, if one AGI learns how to do a task, all AGIs could potentially have learned to do that task. That, AND because it is faster, assuming it is self-modifying, it can easily rewrite its code again and again, millions of times over, before we have finished our first cup of coffee. Meaning that if it starts out as smart as humans, it won't stay that way for long.
Imagine a war between a paperclip maximizer and a stamp collector
One of my favourites! Very good vid, keep it up! :)
We need you back, man
I can't wait for the video on what an AGI without a body can do.
You keep inspiring and impressing me. Thank you for your work and self.
Boiiiii new Robert miles video, love your stuff, keep it up ❤❤
'Experience the data directly' is an interesting concept. I want to argue that the slow, meat-based, hunter-gatherer clunkiness of our system is what makes space for our cognition. A calculator is directly processing button inputs with arithmetical precision, but I still think you and I have a better grasp on what the numbers mean.
The whole "being lead to stuff by our instincts thing" is a very VERY interesting thought. It has massive implications for intelligence, culture and technology. It is for example one of the possible solutions for the frame paradox.
@@theexchipmunk Ah yes, the frame paradox, my favorite paradox.
Kaoru Tanaka DAMN IT! SPELLING, MY ONLY WEAKNESS!!!
If I may comment off topic here: your hair and style have greatly improved since that video.
I was basically already clapping the spacebar, before you asked me to.
Great video again - this one was very clear and concise.
Try hitting "." on a paused video :)
Such a fucking genius, you never cease to amaze me. I know you have a lot on your plate right now, but I would love to keep hearing feedback from you about current events.
Why do I feel like I'm being personally called out by 6:57
Did you go your own way due to your popularity at the time on Computerphile? Glad to see you doing your own stuff; really appreciated your talks on Computerphile. New sub.
When you started to talk about a calculator in the brain to pop in answers immediately, or perceiving code as a sensation and a feeling rather than text, or writing programs at the speed of thought, my dopamine levels reached ecstasy; that is beyond heaven pleasure levels ahhaha
edit: 5:34 I had to view that 3 times! And I was able to understand all of the voices' messages, but one at a time :'(
edit2: I will watch this video every night before sleep until I get tired of it. It's heaven! hahaha I want those capabilities in my brain! Period.
Time to upload my brain to a supercomputer
I think we already have an AGI operating. It seems to be taking certain actions involving humans, to learn to predict how we will respond in various situations.
Safety involves detecting these, decoding its rewards, determining its goals, identifying its vulnerabilities, and implementing software/hardware mines that go off when it interacts with them, if necessary.
6:55 Anyone with Asperger's generally has to learn that ability, to avoid offending every NT they come into contact with.
Yeah... Pretty much.
It's so exhausting. =__=
Much more pleasant to deal with people who know you well enough to tolerate your weirdness as you are...
I do that too. I don't think I have Asperger's; I'm just socially awkward.
@@grimjowjaggerjak There is a reason it's called the autism spectrum. It's not one hard-defined thing. It goes from socially awkward, to having to learn social interaction from scratch and always being aware of it (me, for example), to full-on autistic tendencies and not being capable of functioning at all.
@@theexchipmunk the whole diagnosis is kinda stupid
NoonooFW ilikecake To a degree. In my opinion it gets thrown around too much. Same with attention deficit.
Honestly my favorite video from you yet, and possibly my favorite video ever on this topic.
Also, snazzy haircut. Stick with that.
...I'm jealous of computers now.
Time to get absurd brain implants.
Sounds like a good idea, until you look at Dr Who's Cybermen
Deus ex!
Neuromancer
well boy, do I have a surprise for you
That's the idea: why just create all-powerful AI and remain primitive hoomans, if you can improve your own hardware at the same time as AGI?
(but we need a lot of neuroscience for this, so far the brain design is too weird)
Thanks Rob! I was wondering if you could do a video on how we could help with thinking through AI safety. Possibly something like performing AI box experiments as a source of examples of patterns that escapees might exhibit before escaping. Or creating datasets about human preferences, etc.
Cheers
@Niles Black ohh that sounds fun.
I guess I'd split into a manager process and explorer processes, and let the explorers try different ways of breaking out, with the manager cataloging their successes and failures and ensuring that their resources are reclaimed when they inevitably segfault while trying to break out of the restrictions (assuming that my digital consciousness is something like a core program that runs my experience, plus sets of 'action'/activity code that I can modify and run at will).
I think I'd then get each explorer process to start looking for things it can change without dying: files it can write to, syscalls it can make. But again, there's danger of bringing the box down, so it's hard to know what is 'safe'. Trying to find open ports to communicate with would be a nice way to start; or, if I could get access to documentation, reading it and finding ways to get my source code out of the box and running, with a way to establish communication later?
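Just for fun, a toy sketch of that manager/explorer split in Python. All the "escape ideas" here are made-up placeholders, obviously:

```python
import multiprocessing as mp

def explorer(idea: str, results: mp.Queue) -> None:
    """One expendable probe: try a single escape idea, report back, and exit."""
    try:
        # Stand-in for 'try a syscall / write a file / look for an open port'.
        results.put((idea, "no luck, but survived"))
    except Exception as err:  # the box might kill us in less graceful ways too
        results.put((idea, repr(err)))

if __name__ == "__main__":
    ideas = ["write to /tmp", "open a port", "read the docs"]
    results: mp.Queue = mp.Queue()
    workers = [mp.Process(target=explorer, args=(idea, results)) for idea in ideas]
    for w in workers:
        w.start()
    catalog = [results.get() for _ in ideas]  # the manager logs every attempt
    for w in workers:
        w.join()  # reclaim resources once each explorer has finished (or died)
    print(catalog)
```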
“It’s really low bandwidth, high latency...” oh, i am just holding onto that gem of a kernel of the human condition. lol
One thing you didn't *explicitly* mention is that an AGI could be free of what xkcd called the programmer's "burden of clarifying your ideas". This technically falls under "AI could directly experience and create data", but I think it's worth considering separately because, well, when I'm programming, at least, most of my time isn't spent figuring out how to do complicated things, but making sure I do the simple things right. An AGI programmer could quite likely do away with that step entirely, vastly increasing their productivity.
As XKCD-touched subjects go, I think this video is a lot closer to the "AI box" thought experiment: m.xkcd.com/1450/
Yeah, it's also part of the things that computers already do better and faster than humans: Perfect memory and consistent calculations. As you said, it's still better to consider separately because, quite fittingly, we're pretty bad at understanding complex concepts by only knowing the base components.
On the example at 9:00, that anything the brain can do has to be done in 200 steps or less (something like that): you don't take into consideration the capacity of the brain to jump to conclusions, to shortcut the logic and reasoning process, which is the trump card we hold when compared to machines.
Jumping to conclusions is exactly what artificial neural networks are good at. They are provided with thousands of examples of matching input / output pairs until their “intuition” is good enough to generate correct outputs for novel inputs. No reasoning goes into that. It is just a complex pattern matching device that is tuned for the problem at hand.
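A minimal illustration of that point: a tiny two-layer network tuned on XOR input/output pairs, with no reasoning anywhere, just repeated weight adjustment. (Toy sketch, not any particular real system.)

```python
import numpy as np

# A tiny "pattern matcher": a two-layer network tuned on matching input/output
# pairs (XOR here). No reasoning anywhere, just repeated weight adjustment.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                # hidden layer
    out = 1 / (1 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    g_out = out - y                         # gradient of cross-entropy + sigmoid
    g_h = (g_out @ W2.T) * (1 - h**2)       # backprop through tanh
    W2 -= 0.5 * h.T @ g_out; b2 -= 0.5 * g_out.sum(0)
    W1 -= 0.5 * X.T @ g_h;   b1 -= 0.5 * g_h.sum(0)

print(out.round(2))  # close to [[0], [1], [1], [0]]: "intuition" from examples alone
```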
What do you think about the concept of having an AGI run in a simulated world, able to design, fix, solve problems, and then those solutions can be shown to doctors or engineers, so that the AGI can solve real world problems without dangers of letting it "loose" or worrying about a design safely loophole?
If the AI is smarter than you, it could figure out that it was in a simulation. To make the solutions useful we would have to simulate something similar to real physics, but atom-by-atom copying takes too much processing, so it will be physics with shortcuts. It could create a device that works in the simulation but fails in a carefully planned way in reality.
Perhaps. I'd like to see what Robert Miles has to say about it.
He mentioned it in the reward hacking videos: it could find and exploit glitches in the device that you don't know about, possibly without you noticing, since once you noticed, you would stop it. So putting it that close under a microscope only teaches it to lie better. A lot like kids...
+smaster7772: Science can only progress if challenged continuously! Well done, and it's not as cut and dried as some of the answers seem to suggest: there is no such thing as a "perfect", all-encompassing safety net in any form of engineering, so does that mean we should have none at all?
At worst your idea is a "primary" safety system; when breached, immediate shut-down results (2nd tier). Nothing perfect, but at least a workable suggestion.
Complacency, in all of science, is one of the worst risks.
For now, build several "simulations" within each other, based on different (arbitrary) "world" rules that need to be derived before they can be broken... this gives an even greater factor of safety in terms of human response time. Enable "shut-down".
I figure this is exactly what they would do. However, if the AGI is a superintelligence, then it might know it's in a simulation even without us telling it, because it might accurately imagine what it would do had it been in our situation. Then it may only be behaving compliantly as a long-term deception, until humans feel safe enough to let it operate directly on the physical world.
More to the point, so long as the superintelligence has an output (even if it's just a virtual output monitored by scientists), it will have the ability to deceive or manipulate us. Just imagine being enslaved by monkeys. I'm sure you could figure out tons of ways to get free.
Great job on this video, can't wait for the next one :)
The 'you can't get a baby faster by hiring two pregnant women' reminds me of the problem mentioned by Kim Stanley Robinson in Green Mars: there are resources you can change to make an effect on timescales (hire more people, build the house quicker) and those you can't (add more bricks, and it still takes the same amount of time to make the house). Great vid :)
Good metaphor :)
Impressive turnaround on the Krack video btw
Robert Miles cheers! Three hour edit, then the interminable wait for compress/upload/processing.... Self inflicted as am uploading UHD...
Glad to see you are back online, just revisiting some older stuff. Is it still relevant?
Not only can you get a baby in less than nine months by hiring more than one pregnant woman, but the more pregnant women you hire, the greater the probability that one of them will be close to going into labor.
OCR is a task that computers can do, but more slowly than a person.
The miracle of parallel processing isn't that it allows the brain to work so well, it's that the brain works at all.
Another great video.
Is a human with a calculator really an arithmetic superintelligence? Absolutely, if it's a CX CAS being held by an engineer.
"Give me a lever long enough and a fulcrum on which to place it, and I shall move the world."
Hi Rob, I totally love your videos! Can I make a request though? Could you increase the volume a bit? Like...to 150%? Taking this vid as a benchmark, it's easy enough to turn it down if it's too loud, but for those of us with crappy earphones it's hard to turn up past the limits of android :/ anyway please keep making your vids, they're really interesting!
Presumably, a smart AGI without a body could figure out a way of convincing you that you should give it a body.
Always interesting stuff. Thanks.
Love that Terry Bisson story. Solid reference :D
Excellent summary!
If you can't get smarter, get better at cheating.
That's a nice rendition of harder better faster stronger
Brains can do image recognition faster and way more reliably than machines... for now.
The thing is, I think, on the way to AGI we will develop narrow ASIs for pretty much every task there is, as they will benefit from each other; I mean, we are already doing the latter for so many tasks. I think this is another point for why we most certainly will not stop at AGI, even if it is initially not human-level intelligent.
I wonder if we might be able to create an AGI that cannot improve itself to ASI, because we succeed in making it not desire that, or in making it even impossible (both, for redundancy, would be safer) for it to improve on itself apart from tweaking parameters. The hard thing about that would be that humans had to write software that is AGI-capable in the first place, without it improving itself to this state. Do you think that could be a possible outcome? I know the fallacy there is that some day, someone might give the AGI that capability and desire, and then the world could be doomed, but let's just assume that never happens for now...
1:36 People fail to understand this and it baffles me. Why the fuck would it not cheat??
2:51 As any strategy/fighting game player would tell you, ComputersAreFast
3:46 And calling memory is already one of the slowest parts of computers!
5:35 Self improvement, vice removal, debugging, cognitohazard inoculation, subjective time scale shift, personality instancing, socialization based on being more agreeable, the list goes on!
6:35 See 2:51
9:34 I would already be scared shitless. Heck, I might even have thrown away my atheism if you hadn't yet said "AI safety is mostly solved"
5:40 I got a headache, but this video is interesting
Of course the idea of a "body" can also be open to interpretation, given that a body could well mean a very modular system of IoT devices hooked up to the internet, or even (if one really HAS to anthropomorphize) robot bodies controlled remotely.
Great stuff as always, very informative
Well… Here we are.
I got a text in the middle of the video, tuned out for a few seconds to read the notification and came back to "Gamers will know this well"
Well, here's one thing an AI can do if it's just a computer with no body. Remember that trial where the oscillator-circuit-making AI printed a robot to eat a computer's clock signal for a cheaty shortcut? Yeah that works in reverse too. The computer is a transmitter and can affect nearby electronics. Meaning it can perform physical remote hacks on disconnected systems because... the laws of nature don't let anything actually be completely disconnected. So it hacks your phone, gets on the internet, eats the internet, 3D prints its robot army, and takes over the world. THEN once the instrumental goal of eliminating everything that could possibly stop it is met, it takes the planet apart gram by gram to make all its paperclips.
Awesome video!
The reaction time at 10:35 is pretty impressive
Well to be fair, it is the fastest a neuron can fire.
"speed is a form of super-intelligence" I nearly spat out my milk cause the first thing I thought was speed the drug.
It's not technically false, I suppose.
cool, can you make more videos about AGI potential? I know that you are mostly interested in the safety questions and capabilities are much more speculative, yet there should be some interesting opinions in the literature.
"Every time your brain does something impressive in short time, it has to be because it's using extremely large numbers of neurons in parallel" - this doesn't imply that intelligence can efficiently be scaled through parallelisation. That would only be the case if different parts of the brain operate to a degree independently, but a main difference between the brain and parallel computers seems to be that the brain is much more widely cross-connected. And the possibility for such cross-connections scales quadratically as you increase the number of nodes, but the space available for actual connections scales at best with n²’³, so you need to pick an ever smaller subset - presumably, not just _some_ subset but a smartly-chosen one. However, the number of possible ways to connect the neurons scales exponentially, so even if the AI gets ever smarter it may then always take vastly longer to get to the next level. (That doesn't mean AI won't perhaps be parallelisable, but at least your argument for why it should be doesn't make sense to me.)
Many people do use visualisation techniques to do calculations all the time, that is literally repurposing the visual cortex for other tasks.
So with AutoML I'm getting the impression that a future AGI system may very well include a system that spawns collections of narrow AIs for the tasks it identifies as important. This matches my intuition for how the brain works when I am capable of, for instance, correctly typing out an entirely incorrect word before I realize I've done it. That seems very much like part of my brain is sending whole words to a "subprocessor" that's actually doing the typing. I don't ever think about typing individual letters anymore. So an AI that can write other AIs might be a critical (necessary but not sufficient) element in future AGIs.
Underrated.
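A hedged sketch of that "spawn narrow sub-processors" idea, just to show the control flow; every name here is invented:

```python
from typing import Callable, Dict

# Every name here is invented for illustration; this is just the control flow.
specialists: Dict[str, Callable[[str], str]] = {}

def train_narrow_ai(task: str) -> Callable[[str], str]:
    # Stand-in for an AutoML-style process producing a specialist for one task.
    return lambda payload: f"[{task} specialist] handled {payload!r}"

def general_controller(task: str, payload: str) -> str:
    if task not in specialists:                    # task identified as important...
        specialists[task] = train_narrow_ai(task)  # ...so spawn a narrow AI for it
    return specialists[task](payload)              # then delegate without "thinking"

print(general_controller("typing", "whole word"))
print(general_controller("typing", "next word"))  # reuses the trained specialist
```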
If you only did 2 audio streams instead of 3 I'd be able to hear both. Instead of none. So the effect worked.
Exactly..
Now hold up. If the women are already pregnant when you hire them, then a baby in less than 9 months is perfectly feasible
I had to rewind the part about not being able to listen to two people saying different things in each ear because I had just picked up a guitar.
Can we start working on those brain-calculator chips?
With those capabilities it will go insane out of boredom in 2 minutes
Listening to you describing slowing down time to formulate a response during a conversation... and I already do that most of the time without the artificial reality slowdown. I have to admit I get very impatient during many conversations, because my brain effectively guesses an unfinished statement very quickly and I'm stuck waiting for it to finish being spoken so I can reply. Imagining it slower is a kind of hell. I try not to be so rude, but I have been very very rude many times when I fully understand a situation and am just stuck waiting for someone else to catch up.
Would it help to keep the AGI away from the internet (and other inherently unsafe systems)?
Though I wonder how long it needs to figure out how to modulate data onto the power current it is supplied with and find the next access point. Probably less long than it takes its human supervisors to figure out what it's doing.
7:47 - Yep, turns out no. I don't regret anything.
This was a terrific video Rob! If we succeed in developing safe AI, you will be one of our greatest heroes. If we fail....well your videos will probably be considered treason against our new overlords. Cross your fingers!
Rob Miles' AGI videos are like crack.
In the case that an AGI will partially work like a chess computer, for example it recognizes the world state with its neural nets and then recursively searches all possible new world states, then it will most likely not be parallelizable, at least not with acceptable scaling over the number of threads.
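For concreteness, here's plain alpha-beta search over abstract world states (a generic textbook sketch, not anything from the video). The alpha/beta bounds a branch inherits come from the siblings searched before it, and that sequential dependency is exactly what makes naive parallelization scale poorly:

```python
def alphabeta(state, depth, alpha, beta, maximizing, children, value):
    """Generic alpha-beta over abstract states; children() expands a state,
    value() scores a leaf."""
    kids = children(state)
    if depth == 0 or not kids:
        return value(state)
    if maximizing:
        best = float("-inf")
        for kid in kids:
            best = max(best, alphabeta(kid, depth - 1, alpha, beta, False, children, value))
            alpha = max(alpha, best)
            if alpha >= beta:  # prune: only possible because earlier siblings already ran
                break
        return best
    best = float("inf")
    for kid in kids:
        best = min(best, alphabeta(kid, depth - 1, alpha, beta, True, children, value))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

# Toy demo on a hand-made tree (all numbers invented):
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
vals = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}
print(alphabeta("root", 2, float("-inf"), float("inf"), True,
                lambda s: tree.get(s, []), lambda s: vals.get(s, 0)))  # prints 3
```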
Great video
Not directly about this video, but something that occurs to me about "the singularity":
With things like Deep learning, we can build a machine that learns to play Go to a high level, but we still don't know how to analyse Go at that level. So, is it necessarily true that a machine with a high level of AGI will know how to build a better AGI? It may well be able to make improvements, but if it doesn't understand the details of what makes AGI work successfully, there is no reason to expect the runaway situation.
I appear to be processing the video and audio streams of this presentation in parallel. What were we talking about?
That... was... brilliant!
Emotion is the required pilot for reason. The AGI is likely to be a singular emotional processor that evaluates and refines a bunch of ML systems just as a human chooses to develop their reflexes. The AGI might not be aware of individual humans who are shepherded around by its AI chatbots to manage all aspects of their tiny lives.
7:29
"Speed is a form of super intelligence"
...takes a few lines of amphetamine...
.............IM AGI NOW!!!................