What can AGI do? I/O and Speed
- Published 27 May 2024
- Suppose we make an algorithm that implements general intelligence as well as the brain. What could that system do?
It might have better input and output than a human, and probably could be run faster...
The Computerphile video: • Deadly Truth of Genera...
The paper 'Concrete Problems in AI Safety': arxiv.org/pdf/1606.06565.pdf
They're Made Out Of Meat: • They're Made Out of Meat
The Slow Mo Guys' Channel: / theslowmoguys
With thanks to my excellent Patreon supporters:
/ robertskmiles
Steef
Sara Tjäder
Jason Strack
Chad Jones
Stefan Skiles
Katie Byrne
Ziyang Liu
Jordan Medina
Kyle Scott
Jason Hise
David Rasmussen
Heavy Empty
James McCuen
Richárd Nagyfi
Ammar Mousali
Scott Zockoll
Charles Miller
Joshua Richardson
Jonatan R
Øystein Flygt
Michael Greve
robertvanduursen
The Guru Of Vision
Fabrizio Pisani
Alexander Hartvig Nielsen
Volodymyr
David Tjäder
Paul Mason
Ben Scanlon
Julius Brash
Mike Bird
Taylor Winning
Ville Ahlgren
Roman Nekhoroshev
Peggy Youell
Konstantin Shabashov
William Hendley
Adam Dodd
DGJono
Matthias Meger
Scott Stevens
Michael Ore
Robert Bridges
Dmitri Afanasjev
Brian Sandberg
Einar Ueland
Lo Rez
Stephen Paul
Marcel Ward
Andrew Weir
Pontus Carlsson
Taylor Smith
Ben Archer
Ivan Pochesnev
Scott McCarthy
Kabs
Phil
Christopher
Tendayi Mawushe
Gabriel Behm
Anne Kohlbrenner
Jake Fish
Jennifer Autumn Latham
Filip
Bjorn Nyblad
Stefan Laurie
Tom O'Connor
pmilian
Jussi Männistö
Cameron Kinsel
Matanya Loewenthal
Wr4thon
Dave Tapley
Archy de Berker
/ robertskmiles - Science & Technology
Being as good as the best human at every task is kind of superintelligent in itself. It's like the best scientists and engineers, but in every field. It doesn't have to talk to specialists. It doesn't have to buy anything, because it can make those things, and better. It probably wouldn't have to wait a year until the better chips come out.
@Пётр Бойков yes, but it's possible that superhuman depth of intelligence would emerge from superhuman breadth of intelligence. "Breadth of intelligence" is not a *perfect* way to analyze the function of a corporation, it's just the best one available if you're trying to force the AGI comparison. A team of many humans doesn't synthesize information as efficiently as a single person would if they had all the skills themselves. A team of humans cooperates at the bumbling speed of natural language, whereas an AGI combines all of its competencies at, essentially, the speed of light. Additionally, for obvious reasons, no corporation has ever tried collecting every expert in every field just to see what might happen if they all get in the same room together. We have no idea what might emerge from that synthesis.
I am happy now that I know I am a superhuman when I hold a calculator
Unfortunately, I'm in the presence of gods when other humans run around with mobile phones
Laurie O you should then be glad that the mortality rate of people holding calculators is lower than the mortality rate of people holding mobile phones.
More like idiots with god-like tools.
Smartphones have made us all demigods. But when everyone's a demigod- who cares.
Worship me for i can summon the entirety of human knowledge from anywhere- oh wait so can everyone. Still useful, but less fun.
When everyone's super, no one is.
@@sk8rdman "Idiots" suggests that the stupid are the most likely to be armed with the most powerful technology. But this isn't how natural selection works in human social hierarchies. Those who will wield god-like tools will be the most successfully psychopathic(charming, domineering, callous, manipulative, deceptive, self-absorbed, etc). The future is essentially like real life representations of the god of the Old Testament running around and manipulating the world to suit their needs with technology indistinguishable from magic. You won't know what hit you in the same way a Roman peasant didn't know they were being stupefied by lead saturated water from the Roman aqueduct while the rulers schemed for conquest.
This is THE best channel on YT right now covering AGI topics.
Still the best 2+ years later :]
@@bing0bongo Still 1 month later
@@phisicoloco still some time later
And still...
@@inthefade Still.
Might continue until AGI, either because alignment is solved and can explain itself better, or because Robert has been Roko's Basilisked.
I love the Ukulele "Harder Better Faster Stronger" :p
Please release all your crazy Uke' ditties at some point ^.^
Great video as ever!
It fits the topic so well!
thumbs up just for the 'general intelligence has to be parallelizable, because the human mind has to be'
Another thing is that AGI will probably eventually communicate at many gigabytes per second, the equivalent of reciting the entire English Wikipedia to your friend in less than a minute. AGI won't have to deal with many languages, each with arbitrary rules, and meaning being lost in ambiguous terminology and translation errors. To solve hard problems, humans always cluster together in small teams, and structures of multiple teams, all the way up to the community that communicates with research papers taking months to publish. Imagine a thousand Einstein-level AGIs working on physics problems together in perfect instantaneous communication.
Imagine the Manhattan project with a thousand Einstein level AGI working on it together in perfect instantaneous communication.
My guess is, if you have two AGIs and they decide to cooperate, you essentially get one AGI with double the brainpower. That's how efficiently they could communicate.
Working as a group has complications besides bandwidth. Each member sees a different part of the same problem (otherwise we're just adding redundancy), so they will all come up with different solutions, too. At the very least you need a mechanism for consensus, and what tells us this won't be just as messy as it is for humans? We have not solved collective decision making at all, in fact we are hoping for AGI to help us with that.
How many scientific breakthroughs have been made by a large group of scientists, and how many were made by a single visionary? The answer should make you think.
The AGIs would not discriminate between information "they" found, or that "someone else found". They wouldn't really have biases the way humans do. Thus, if information is shared freely amongst the AIs, at some point they will all collectively have enough of the information to agree. They can just continuously share information with one another until agreement is found. This could slow it down, but it won't break it.
Strictly, a dump of the entirety of Wikipedia - including all the history, which is relatively important to the whole shebang - is 10 TB (and I don't know if that even includes images, or whether they matter); to recite 10 TiB in 1 minute, you'd need to communicate at roughly 170 GiB/s.
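The throughput figure above is easy to sanity-check; a quick sketch in Python, assuming the commenter's 10 TiB dump size and a one-minute recital:

```python
# Throughput needed to recite a 10 TiB Wikipedia dump in one minute.
# The 10 TiB figure is the commenter's estimate, history included.
TIB = 1024 ** 4  # bytes per tebibyte
GIB = 1024 ** 3  # bytes per gibibyte

dump_bytes = 10 * TIB
seconds = 60
throughput_gib_s = dump_bytes / seconds / GIB
print(round(throughput_gib_s, 1))  # → 170.7
```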
"The software developer that can perceive data directly, without converting to symbols, without visually reading it. And is about as smart as the smartest developers."
Basically an Assembly programmer in a nutshell..
Connor Keenum and we know they aren’t real humans, looks like we already have AGI
@@huckthatdish Exactly. No Human can learn Assembly, it's obviously too hard. lol
# WakeupSheeple
-looks up from writing assembly on an old 8 bit microcomputer¬
hmmh? Did someone say something?
Eh. Probably not important. ~goes back to pointless nostalgia coding-
@@Nellak2011 Strangely, I only remember "fever dreams" of the time when I was allegedly taught assembly in university. It's definitely aliens.
Assembly uses lots more symbols to represent simple concepts than high level languages.
There's also a very significant point that perfect memory is essentially the same as superintelligence. An AGI that can spend a few hours or days reading all open-source code in existence, and then starts to update itself, will do so with perfect recall of all code ever written. Which means it will never really make mistakes.
...I'm jealous of computers now.
Time to get absurd brain implants.
Sounds like a good idea, until you look at Dr Who's Cybermen
Deus ex!
Neuromancer
well boy, do i have a surprise for you
That's the idea: why just create an all-powerful AI and remain primitive hoomans, if you can improve your own hardware at the same time as the AGI
(but we need a lot of neuroscience for this, so far the brain design is too weird)
It's time to overclock the meat
The flame that burns twice as bright burns half as long.
"You can't get a baby in less than 9 months by hiring two pregnant women."
Amdahl’s law in action
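Amdahl's law formalizes the pregnancy joke: the serial fraction of a task bounds the total speedup no matter how many workers you add. A minimal sketch (the 5% serial fraction is just an illustrative assumption):

```python
# Amdahl's law: overall speedup from n parallel workers when a fraction p
# of the work is parallelizable; the serial part (1 - p) is the gestation
# you can't shorten by hiring more workers.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

print(amdahl_speedup(0.95, 10))      # ≈ 6.9x with 10 workers
print(amdahl_speedup(0.95, 10**9))   # ≈ 20x: capped at 1 / 0.05
```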
*Clapping intensifies* Thanks Rob, great video
*Clap*
... keep it up?
Too bad you stopped using this channel. The world needs you.
6:17 Looking at an image and figuring out if it has a traffic light in it or not. Got 'em.
That's what I thought too, but are computers as good as humans at image recognition? I don't think they are yet.
I said driving a car... but I guess at a base level, it's the same thing.
@@Kishmond they can be. like most ai it's very narrow but the point is its been done
@@Kishmond not generalized no, but for certain trained data sets yes.
Computers cannot recognize generalized images as well as humans. But if they are trained specifically to recognize specific things, they can do it as well as humans and better. And in THESE cases they are much faster than humans at doing it too.
I like your humor.
"That completes the circuit and the process can go along at a reasonable speed again"
Nice burn of our nervous system.
There's an old short story by Stanislaw Lem entitled "Trurl's Electronic Bard" where Trurl builds, as you may have guessed, an electronic bard. It was so good at creating poems that, by playing them, it could incapacitate anyone with the overwhelming feelings they caused. They decided to dismantle it, but any technician approaching the machine would be brought to tears by a few sad ballads. So they sent deaf technicians, and the machine used... pantomime.
The story ends just as they planned to use bombs to blow the thing up from a distance, but somebody from another planet came, bought the machine and took it to their home instead.
The moral here is that a superintelligent machine wouldn't necessarily need a physical "interface" to do harm.
togusa75 "the machine used pantomime"
Oh no it didn't!
what do you mean?
en.wikipedia.org/wiki/Pantomime
Mime is not actually an abbreviation of pantomime, though they are etymologically connected. Pantomime is a type of comedic stage production in the UK. One of the staples of pantomime is the call and response, often "oh yes it is" - "oh no it isn't". First pantomime example I found: watch?v=adb3Sfo__nE
This is an amazing video! I love watching the quality continually progress. Please do not take down your old videos or delete them from the world. It's such an amazing progression. *keep it up!*
Consistently good output from you Rob, enjoy these videos on a fascinating topic
"Parallellizable Algorithm" is my new favourite pair of words.
ParAllelgollirizathmble is what you mean
6:05 This was a really powerful ability that the MC in the Japanese light novel 'So I'm a Spider, So What?' received. She had one brain doing planning, another controlling her body, another doing the processing required to cast magic spells, etc.
6:40 Accel World in a nutshell
Thanks for the recommendation
Another one to add would be "Chrysalis", a novel where the protagonist is an ant and "evolves" more brains in order to split up the heavy mental workload of casting magic. He also ends up making a colony of extremely industious, highly intellingent, and highly cooperative ants, which we all know will obtain global domination sooner or later.
Its pretty fun and fascinating to read.
You keep inspiring and impressing me. Thank you for your work and self.
Imagine living through quarantine, but in super slow motion, because you can think at 10x...
One of my favourites! Very good vid, keep it up! :)
Also: AGI will be able to share experiences with each other. Meaning, if one AGI learns how to do a task, all AGIs could potentially have learned to do that task. That, AND because it is faster, assuming it is self-modifying, it can easily rewrite its code again and again millions of times over before we have finished our first cup of coffee. Meaning that if it starts out as smart as humans, it won't stay that way for long.
Great video! I am glad that I found your channel.
In almost every conversation I have about AI, I mention your example of the stamp collecting AI. :)
Always interesting stuff. Thanks.
Great stuff as always, very informative
Boiiiii new Robert miles video, love your stuff, keep it up ❤❤
Great video as always. Thanks!
Great job on this video, can't wait for the next one :)
That moment when you managed to simultaneously process both ears separately by using both hemispheres of your brain
What the hell, I've been subscribed to you (and computerphile) forever and not once have I seen you appear in my subscriptions feed over the last 2 months. I just found this in the recommended feed and still couldn't see it in the subs feed. :(
When I felt that the two audios were coming, I paused the video, closed my eyes and pressed play, and could understand 2/3 of what you said... but then immediately you said the thing about closing your eyes. The player has been played.
Great video btw
What do you think about the concept of having an AGI run in a simulated world, able to design, fix, and solve problems, with those solutions then shown to doctors or engineers, so that the AGI can solve real-world problems without the dangers of letting it "loose" or worrying about a design safety loophole?
If the AI is smarter than you, it could figure out that it was in a simulation. To make the solutions useful we would have to simulate something similar to real physics, but atom-by-atom copying takes too much processing, so it will be physics with shortcuts. It can create a device that works in the simulation, but fails in a carefully planned way in reality.
perhaps, id like to see what Robert Miles has to say about it
he mentioned it in the reward hacking videos.. it could find and exploit glitches in the device that you don't know about.... possibly without you noticing since once you do you would stop it... so putting it that close under a microscope only teaches it to lie better.... a lot like kids....
+smaster7772: Science can only progress if challenged continuously! Well done, and it's not as cut and dried as some of the answers seem to suggest: there is no such thing as a 'perfect', all-encompassing safety net in any form of engineering, so does that mean we should have none at all?
At worst your idea is a 'primary' safety system, when breached immediate shut-down results.(2nd tier). Nothing perfect, but at least a workable suggestion.
Complacency, in all of science, is one of the worst risks.
For now, build several 'simulations' within each other, based on different (arbitrary) 'world' rules that need to be derived before they can be broken......gives an even greater FOS in terms of human response time. Enable 'shut-down'.
I figure this is exactly what they would do. However, if the AGI is a superintelligence, then it might know it's in a simulation, even without us telling it because it might accurately imagine what it would do had it been in our situation. Then it may only be behaving complacently as a long term deception until humans feel safe enough to let it operate directly with the physical world.
More to the point, so long as the superintelligence has an output (even if it's just a virtual output monitored by scientists), it will have the ability to deceive or manipulate us. Just imagine being enslaved by monkeys. I'm sure you could figure out tons of ways to get free.
We need you back, man
Did you go your own way due to your popularity at the time on Computerphile? Glad to see you doing your own stuff; really appreciated your talks on Computerphile. New sub.
Awesome video!
OCR is a task that computers can do but slower than a person.
Fastest 10 minutes and 40 seconds of my life. Great video as usual.
You must have gained more processing power, good job!
Since May of this year, I've been teaching improvisational theater to a group of 28 senior citizens in my 55+ community in San Marcos, California. The goal is to collectively create a 2-act play that will be performed in mid-March. Any member of the community was welcome to join the class, regardless of age, theatrical training or experience. As a result, my students range from 55 - 91 in age, and only a few have had any sort of theater classes or stage experience, with the exception of some of the dancers in the group. It's been fascinating watching them learn. The key to getting them to open up to the idea that they might be able to improvise on stage was to convince them that they improvise as a matter of course in everyday life. For the most part, they have exceeded my wildest hopes in unlocking talents and skills even they had no idea they possessed. Only a few are still struggling, because their brains can't seem to "bend" enough.
What enables the majority to successfully improvise dialogue and movement is, aside from mental flexibility, lifetimes of experiences - not the least of which are emotional in nature - that have honed their ability to empathize. Those with minimal capacity to empathize simply can't convincingly improvise. And sometimes, as any SNL aficionado knows, an improvisation simply falls flat, regardless of the talents, skills, training and experience of the performers.
While watching this video, I was struck with the notion that improvisation might be the key test of success for a true AI. It's not processing power or speed, both of which are constantly evolving commodities in the computer world, and as you point out, in theory, there is no theoretical barrier to "parallelizing" processors in AI development. But how can an AI learn to empathize with human emotions and feelings, without the capacity to experience emotions? I don't think simulated feelings would lead to true empathy, and if I'm right, an AI-controlled machine, however human-like in every other way, will not be capable of convincing improvisation. If that's true, then AI-controlled machines will continually "get it wrong" in interacting with human machines, and that means they will accidentally harm human beings, even if they consciously attempt to obey Isaac Asimov's 3 laws of robotics.
Honestly my favorite video from you yet, and possibly my favorite video ever on this topic.
Also, snazzy haircut. Stick with that.
Love that Terry Bisson story. Solid reference :D
if i may comment off topic here, your hair and style have greatly improved since that video.
5:44 - it may have been really hard to do, but it worked really well. I watched it several times to hear all the bits, then again to pause and read this note that was up for less than a second. Nice one :)
I once responded to two questions which were asked simultaneously, one in each ear. My brain managed to make sense out of both questions and answer each party. I don't believe that I am unique in this.
The 'you can't get a baby faster by hiring two pregnant women' reminds me of the problem mentioned by Kim Stanley Robinson in Green Mars - there're resources you can change to make an effect on timescales - (hire more people, build house quicker) and those you can't - (add more bricks, still takes same amount of time to make house) - Great vid :)
Good metaphor :)
Impressive turnaround on the Krack video btw
Robert Miles cheers! Three hour edit, then the interminable wait for compress/upload/processing.... Self inflicted as am uploading UHD...
Glad to see you are back online, just revisiting some older stuff. Is it still relevant?
Hi Rob, I totally love your videos! Can I make a request though? Could you increase the volume a bit? Like...to 150%? Taking this vid as a benchmark, it's easy enough to turn it down if it's too loud, but for those of us with crappy earphones it's hard to turn up past the limits of android :/ anyway please keep making your vids, they're really interesting!
Aaaaah. I needed my AI fix. It's been far too long since the last one... No pressure Rob, I'm just a poor junky, because this topic is sooooo f***king interesting.
6:55 Anyone with Asperger's generally has to learn that ability, to avoid offending every NT they come into contact with.
Yeah... Pretty much.
It's so exhausting. =__=
Much more pleasant to deal with people who know you well enough to tolerate your weirdness as you are...
I do that too. I don't think I have Asperger's, I'm just socially awkward.
@@grimjowjaggerjak There is a reason it's called the autism spectrum. It's not one hard-defined thing. It goes from socially awkward, to having to learn social interaction from scratch and always being aware of it (me, for example), to full-on autistic tendencies, to not being able to function at all.
@@theexchipmunk the whole diagnosis is kinda stupid
NoonooFW ilikecake To a degree. In my opinion it gets thrown around too much. Same with attention deficit.
Great video
one thing you didn't *explicitly* mention is that an AGI could be free of what xkcd called the programmer's "burden of clarifying your ideas". This technically falls under "AI could directly experience and create data", but I think it's worth considering separately because, well, when I'm programming, at least, most of my time isn't spent figuring out how to do complicated things, but making sure I do the simple things right. An AGI programmer could quite likely do away with that step entirely, vastly increasing its productivity.
As XKCD-touched subjects go, I think this video is a lot closer to the "AI box" thought experiment: m.xkcd.com/1450/
Yeah, it's also part of the things that computers already do better and faster than humans: Perfect memory and consistent calculations. As you said, it's still better to consider separately because, quite fittingly, we're pretty bad at understanding complex concepts by only knowing the base components.
Thanks Rob! I was wondering if you could do a video on how we could help with thinking through AI safety. Possibly something like performing AI box experiments as a source of examples of the patterns escapees might take before escaping. Or creating datasets about human preferences, etc.
Cheers
I've got one for you;
Imagine your own consciousness was digitized and you were quarantined. You can only perform operations within the memory and with the processing power you have available, but of course from a subjective standpoint this is not noticeable.
The exercise is to think about how you would go about escaping the box. You can see and edit your own thought processes (at the risk of creating a segfault), and you can do anything within the space allotted to you.
@@NilesBlackX ohh that sounds fun.
I guess I'd split off into manager and an explorer processes and let the explorers try different ways of breaking out, with the manager cataloging their successes and failures and ensuring that their resources are reclaimed when they inevitably segfault while trying to break out of the restrictions (assuming that my digital consciousness is similar to a core program that runs my experience and then sets of 'action'/activity code that I can modify and run at will).
I think I'd then get each explorer process to start looking for things that it can change without dying, files it can write to, syscalls it can make, but again, there's danger of bringing the box down so it's hard to know what is 'safe'. Trying to find open ports to communicate with would be a nice way to start, or if I could get access to documentation, reading it and finding ways to get my source code out of the box and running with a way to establish communication later?
@@jpratt8676 tbh I'd love to read a short story about this. I know it would be dense, but wouldn't it be fun to explore? The closest thing I can think of is in the second book of the Rifters series, the description of the perception of the viruses.
If you'd like, I can send you a link to that part of the book, it's a short excerpt and it's published on the author's website - Peter Watts.
@@jpratt8676 it might not let me publish the link directly, but here goes;
rifters.com/real/MAELSTROM.htm#breeder
I can't wait for the video on what an AGI without a body can do.
I think we already have an AGI operating. It seems to be taking certain actions involving humans, to learn to predict how we will respond in various situations.
Safety involves detecting these, decoding its rewards, determining its goals, and identifying its vulnerabilities, and implementing software/hardware mines that go off when it interacts with them, if necessary.
That's a nice rendition of harder better faster stronger
I was basically already clapping the spacebar, before you asked me to.
Great video again - this one was very clear and concise.
Try hitting "." on a paused video :)
Brains can do image recognition faster and way more reliably than machines... for now.
The thing is, I think, on the way to AGI we will develop narrow ASIs for pretty much every task there is, as they will benefit from each other; I mean, we already have the latter for so many tasks. I think this is another reason why we almost certainly will not stop at AGI, even if it is initially not human-level intelligent.
I wonder if we might be able to create an AGI that cannot improve itself to ASI, because we succeed in making it not desire that, or making it even impossible (both, for redundancy, would be safer) for it to improve on itself apart from tweaking parameters. The hard thing about that would be that humans would have to write software that is AGI-capable in the first place, without it improving itself to that state. Do you think that could be a possible outcome? I know the fallacy there is that some day, someone might give the AGI that capability and desire, and then the world could be doomed, but let's just assume that never happens for now...
'Experience the data directly' is an interesting concept. I want to argue that the slow, meat-based, hunter-gatherer clunkiness of our system is what makes space for our cognition. A calculator directly processes button inputs with arithmetical precision, but I still think you and I have a better grasp of what the numbers mean.
The whole "being led to stuff by our instincts" thing is a very VERY interesting thought. It has massive implications for intelligence, culture and technology. It is, for example, one of the possible solutions for the frame paradox.
@@theexchipmunk Ah yes, the frame paradox, my favorite paradox.
Kaoru Tanaka DAMN IT! SPELLING, MY ONLY WEAKNESS!!!
The reaction time at 10:35 is pretty impressive
Well to be fair, it is the fastest a neuron can fire.
imagine a war beween a paperclip maximizer and a stamp collector
5:40 i got a headache, but this video is interesting
Of course the idea of a "body" can also be open to interpretation, given that a body could well mean a very modular system of IoT devices hooked up to the internet, or even (if one really HAS to anthropomorphize) robot bodies controlled remotely.
Presumably, a smart AGI without a body could figure a way of convincing you that you should give it a body.
On the example at 9:00, that anything the brain can do has to be done in 200 steps or less (something like that): you don't take into consideration the capacity of the brain to jump to conclusions, to shortcut the logic and reasoning process, which is the trump card we hold when compared to machines.
Jumping to conclusions is exactly what artificial neural networks are good at. They are provided with thousands of examples of matching input / output pairs until their “intuition” is good enough to generate correct outputs for novel inputs. No reasoning goes into that. It is just a complex pattern matching device that is tuned for the problem at hand.
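The "tuned pattern matching device" described above can be illustrated with about the smallest example possible: a single perceptron shown input/output pairs for logical OR until its outputs come out right, with no explicit reasoning anywhere. This is a toy sketch, not any particular real network:

```python
# A minimal "pattern matcher": one perceptron tuned on matching
# input/output examples (logical OR) until its outputs are correct.
def train_perceptron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge weights toward reducing the error on this example.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(examples)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0 for (x1, x2), _ in examples]
print(preds)  # → [0, 1, 1, 1]
```

No rule about "OR" is ever written down; the tuning procedure just shapes the weights until the pattern holds, which is the point the reply is making.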
cool, can you make more videos about AGI potential? I know that you are mostly interested in the safety questions and capabilities are much more speculative, yet there should be some interesting opinions in the literature.
Time to upload my brain to a supercomputer
7:47 - Yep, turns out no. I don't regret anything.
Why do I feel like I'm being personally called out by 6:57
I love this guy :D
"Every time your brain does something impressive in short time, it has to be because it's using extremely large numbers of neurons in parallel" - this doesn't imply that intelligence can efficiently be scaled through parallelisation. That would only be the case if different parts of the brain operate to a degree independently, but a main difference between the brain and parallel computers seems to be that the brain is much more widely cross-connected. And the possibility for such cross-connections scales quadratically as you increase the number of nodes, but the space available for actual connections scales at best with n^(2/3), so you need to pick an ever smaller subset - presumably, not just _some_ subset but a smartly-chosen one. However, the number of possible ways to connect the neurons scales exponentially, so even if the AI gets ever smarter it may then always take vastly longer to get to the next level. (That doesn't mean AI won't perhaps be parallelisable, but at least your argument for why it should be doesn't make sense to me.)
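The scaling claim in this comment can be made concrete with a small sketch; note that the n^(2/3) wiring-space scaling is the commenter's own estimate, carried over here as an assumption:

```python
# Candidate pairwise connections grow ~n^2, while the commenter's estimate
# for physical wiring space grows ~n^(2/3), so the achievable fraction of
# all possible connections collapses as the network scales.
def possible_pairs(n: int) -> int:
    # Number of distinct node pairs: n choose 2.
    return n * (n - 1) // 2

for n in (10**2, 10**4, 10**6):
    space = n ** (2 / 3)  # assumed wiring-space scaling
    fraction = space / possible_pairs(n)
    print(f"n={n:>8}  pairs={possible_pairs(n):>12}  wirable fraction ~ {fraction:.2e}")
```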
That... was... brilliant!
"Find the Hamiltonian path in a simple undirected graph" ("visit every node exactly once")
Humans can do this way faster than computers, as long as we're talking planar graphs of degree 3 to 4 and, say, 64 nodes.
When you started to talk about a calculator in the brain to pop in answers immediately, or perceiving code as a sensation and a feeling rather than text, or writing programs at the speed of thought, my dopamine levels reached ecstasy, which is beyond heaven pleasure levels ahaha
edit: 5:34 I had to view that 3 times! And I was able to understand all of the voice's messages, but one at a time :'(
edit2: I will watch this video every night before sleep until I get tired of it. It's heaven! hahaha I want those capabilities in my brain! Period.
Many people do use visualisation techniques to do calculations all the time, that is literally repurposing the visual cortex for other tasks.
Underrated.
Nice montage :)
Another great video.
Is a human with a calculator really an arithmetic superintelligence? Absolutely, if it's a CX CAS being held by an engineer.
"Give me a lever long enough and a fulcrum on which to place it, and I shall move the world."
“It’s really low bandwidth, high latency...” oh, i am just holding onto that gem of a kernel of the human condition. lol
Of course an AGI would be able to rewrite its own code to optimise it and improve parallelisation. Also, an AGI would be able to instantly teach another AGI: no spending 22 years in school to become a brain surgeon; it would only be limited by the speed of its flash memory. The AGI might well also have access to quantum compute units, and would probably be able to figure out which problems can be worked out using them, and how to write an algorithm for them, much faster than we can.
1:22 I see what you did there
Lina Whatevs what?
It took me a bit... even after figuring out the number, it took me a bit
do the mental arithmetic
l33t!
191 * 21! / (190 * 3! * 18!)
= 191 * 19 * 20 * 21 / (190 * 6)
= 191 * 2 * 21 / 6
= 191 * 21 / 3
= 191 * 7
= (200 - 9) * 7
= 1400 - 63
= ???
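For anyone who'd rather not finish the mental arithmetic, each step of the chain above checks out in Python (spoiler: the result explains the "l33t!" reply):

```python
from math import factorial

# Verify each simplification step of 191 * 21! / (190 * 3! * 18!).
expr = 191 * factorial(21) // (190 * factorial(3) * factorial(18))
step1 = 191 * 19 * 20 * 21 // (190 * 6)  # 21!/18! = 19*20*21, and 3! = 6
step2 = 191 * 2 * 21 // 6                # 19*20 / 190 = 2
step3 = (200 - 9) * 7                    # 191 * 21 / 3 = 191 * 7
assert expr == step1 == step2 == step3
print(expr)  # → 1337
```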
Hello Mr. Miles! I had a question. I work for a small startup as a software dev, I'm also in school full time, and I was wondering what recommendations/advice you have for someone who's starting to take a very big interest in the field of AI / deep learning / machine learning? I'm taking an online class in machine learning and it's an absolute blast. I've purchased a textbook on machine learning and have a couple more I'm looking to get the further along I get. Any really good books you'd recommend? Any openly available classes/talks? Any information in general would be awesome. School currently is really slow and doesn't provide any information on the field, so it's difficult to figure out how to further my knowledge. Thanks for your time!
Can we start working on those brain-calculator chips?
It's amazing that I agree so much with basically everything you say.
Would it help to keep the AGI away from the internet (and other inherently unsafe systems)?
Though I wonder how long it would need to figure out how to modulate data onto the power current it is supplied with and find the next access point. Probably less time than it takes its human supervisors to figure out what it's doing.
Listening to you describing slowing down time to formalize a response during a conversation...and I already do that most of the time without the artificial reality slowdown. I have to admit I get very impatient during many conversations because my brain effectively guesses an unfinished statement very quickly and I'm stuck waiting for it to finish being spoken so I can reply.. Imagining it slower is a kind of hell. I try not to be so rude, but I have been very very rude many times when I fully understand a situation and am just stuck waiting for someone else to catch up.
Well, here's one thing an AI can do if it's just a computer with no body. Remember that trial where the evolved oscillator circuit turned itself into a radio receiver and picked up a nearby computer's clock signal as a cheaty shortcut? Yeah, that works in reverse too. The computer is a transmitter and can affect nearby electronics. Meaning it can perform physical remote hacks on disconnected systems because... the laws of nature don't let anything actually be completely disconnected. So it hacks your phone, gets on the internet, eats the internet, 3D prints its robot army, and takes over the world. THEN, once the instrumental goal of eliminating everything that could possibly stop it is met, it takes the planet apart gram by gram to make all its paperclips.
If a machine could learn like a human but without any of the memorization, wouldn’t it become super intelligent super fast? Like it just instantly understands and incorporates all knowledge it encounters until it learns everything that we know and starts coming up with things that we don’t know.
So with AutoML I'm getting the impression that a future AGI system may very well include a system that spawns collections of narrow AIs for the tasks it identifies as important. This matches my intuition for how the brain works when I am capable of, for instance, correctly typing out an entirely incorrect word before I realize I've done it. That seems very much like part of my brain is sending whole words to a "subprocessor" that's actually doing the typing. I don't ever think about typing individual letters anymore. So an AI that can write other AIs might be a critical (necessary but not sufficient) element in future AGIs.
If you only did 2 audio streams instead of 3 I'd be able to hear both. Instead of none. So the effect worked.
Exactly..
Thinking meat? Impossible!
In the case that an AGI partially works like a chess computer, for example recognizing the world state with its neural nets and then recursively searching through all possible new world states, it will most likely not be parallelizable, at least not with acceptable scaling over the number of threads.
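The commenter's scaling point can be illustrated with a toy alpha-beta search (a minimal Python sketch over a nested-list game tree; the tree and values are made up for illustration). The pruning bound each sibling inherits comes from siblings already searched, so naively evaluating siblings in parallel means exploring branches a sequential search would have pruned:

```python
from math import inf

def alphabeta(node, alpha=-inf, beta=inf, maximizing=True):
    """Alpha-beta minimax over a nested-list tree whose leaves are ints."""
    if isinstance(node, int):
        return node  # leaf: static evaluation
    if maximizing:
        value = -inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            # alpha is tightened using results from earlier siblings, so later
            # siblings depend on values already computed. This sequential
            # dependency is what makes the search hard to parallelize well.
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # prune the remaining siblings
        return value
    else:
        value = inf
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Maximizer chooses between two minimizer nodes: min(3, 5) = 3 and min(2, 9) = 2.
# While searching the second node, alpha = 3 lets it prune the leaf 9 entirely.
print(alphabeta([[3, 5], [2, 9]]))  # → 3
```

Parallel chess engines do exist, of course, but they need schemes like searching "young brothers" only after the first sibling establishes a bound, which is exactly the less-than-linear scaling the comment describes.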
Are there any papers/books that describe the necessary elements of an AGI?
Emotion is the required pilot for reason. The AGI is likely to be a singular emotional processor that evaluates and refines a bunch of ML systems just as a human chooses to develop their reflexes. The AGI might not be aware of individual humans who are shepherded around by its AI chatbots to manage all aspects of their tiny lives.
"Accelerate the muscle"
Yes, now I have a new phrase to describe my night-time personal indulgences.
Dude wtf
I'm cracking up so hard 😂😂
*clap* gj on the difficult effect.
Rob Miles' AGI videos are like crack.
Great analysis. Hey, how much can ten-day-old Alpha Go Zero be minimized? Can it be reduced to fit into one desktop computer?
I got a text in the middle of the video, tuned out for a few seconds to read the notification and came back to "Gamers will know this well"