Not wishing to be unduly cynical, but the "partnership on AI" looks like a bunch of supranational businesses that would benefit from AI by mining our data, advertising to us in an ever more "targeted" way, and perhaps avoiding tax in massively more efficient ways...
That's exactly right. And the main reason why we have to think about democratizing the economy and technology. It was true at the beginning of the Industrial Revolution and is even more true now. Otherwise we'll be waking up in a Blade Runner society.
"We are always going to stay in control." Really? I wonder, then, why programmers are finding some problem solutions derived by algorithms impossible to track, trace, or explain. Where understanding stops, so does control.
I am tired of talks about science or technology where the speaker says something like "I promise I will not show equations". And then people/students wonder what math is for? The reality is that without math you cannot do any kind of advanced science or technology, and people should know that. Stop hiding mathematics, and start giving math the credit it deserves. Shame on speakers like this. (Mathematician and AI scientist here).
Hmm, might at least mention that a lot of that Microsoft/Google R&D is being funded by DARPA, who very much *do* want "killer robots", and drones, and tanks, and "smart" tactical nukes, etc. That's the only problem I have with AI really: who is it learning _from?_ No shortage of terrorists in the world who've taught us that if you raise a child as a terrorist, they are very likely to grow up to _be_ a terrorist. The _tech_ is brilliant, the military shouldn't be allowed anywhere _near_ *this* "child", but guess who's helping to fund it. Sheesh, *all* the world needs is an "infinitely intelligent" _jarhead._ Skynet wouldn't be far behind. 😆
@@DanyIsDeadChannel313 When you deliberately assemble a system so it becomes conscious, that's artificial consciousness. Having children can perhaps be seen as a weak form of artificial consciousness.
I don't think so. An algorithm will always be just an algorithm; I'm just calling a duck a duck. The human brain has 86 billion neurons, and lord knows how much grey matter and white matter.
AI in 1950 = by the 1970's AI will be indistinguishable from human intelligence.
AI in 1960 = by the 1980's AI will be indistinguishable from human intelligence.
AI in 1970 = forget what we said in the 50's... but hey, we landed on the moon and you just wait until the 1990's!
AI in 1980 = don't forget... the 90's are right around the corner... and umm... have you seen the graphics on Dragon's Lair??!!
AI in 1990 = just wait for the next millennium!
(1995 - 1/1/2000) = OMG Y2K !!!!!!!!
AI in 2000 = AI? pfft, we've had it since the 70's!
AI in 2010 = who cares about AI when we have iPhones?
AI in 2017 = by the 2030's AI will be indistinguishable from human intelligence.
Brian Decker, it turned out to be very hard. Would you consider a car that drives itself in 99.99% of conditions a full narrow AI? When that happens, we could worry then.
Yeah it's the boy who cried wolf. The thing is the wolf (or AI in this case) ends up coming eventually. And based on current advancements, it does look like this time it's for real. The increase in breakthroughs, R&D and funding this time around is incomparable to all the last AI hype cycles.
gespilk Yeah, you're talking about AGI. That's really hard, and you can't instantly go to AGI; you need to build the pillars of ANI first (what we have now) and work up to AGI, which we will do over the decades unless some catastrophic event wipes us out... Oh yeah, and I was responding to the OP with my first comment, just in case you thought I was responding to you lol.
I don't really know anything about this stuff, but reading around on the web suggests that Frank Rosenblatt's "perceptrons" were not - necessarily - single-layer things (that he at least considered using multiple layers) and that Minsky and Papert might have done him a disservice by restricting the definition in their book.
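That single-layer limitation is easy to demonstrate. Below is a minimal sketch of a Rosenblatt-style single-layer perceptron (all names are my own, for illustration): with the classic error-correction rule it learns the linearly separable OR function perfectly, but no setting of its weights can ever reproduce XOR, which is the restriction Minsky and Papert's book made famous.

```python
# Rosenblatt-style single-layer perceptron: one weight vector, a bias,
# and the classic error-correction update rule.
def train_perceptron(samples, epochs=20, lr=1.0):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # -1, 0, or +1
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

OR  = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w, b = train_perceptron(OR)
print([predict(w, b, *x) for x, _ in OR])    # learns OR exactly: [0, 1, 1, 1]

w, b = train_perceptron(XOR)
print([predict(w, b, *x) for x, _ in XOR])   # never equals [0, 1, 1, 0]: XOR is not linearly separable
```

Adding a second (hidden) layer removes the limitation, which is why multi-layer networks, once they could be trained, revived the whole field.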
"We always remain in control" Til that moment when energy is delivered through radio waves and a computer can be powered without cable and the computer has no off switch.
In systems that have self-generating code and algorithms, a parallel decompiler should be in operation to allow real-time analysis of the operations by people and other machines.
41:48 - Could use the vertical position to show confidence. At the top the movies where the algorithm is almost sure it got your taste right, at the bottom the movies where it has less data. This isn't the same as putting them in the (horizontal) middle; the middle would be for films that the algorithm thinks you will neither like nor dislike very much.
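A rough sketch of that two-axis idea (the layout function, the data, and the confidence formula here are all made up for illustration): the x coordinate is the predicted like/dislike score, and the y coordinate rises toward 1 as the algorithm accumulates more ratings to base its estimate on.

```python
# Hypothetical two-axis layout: x = predicted score in [-1, 1],
# y = confidence, a simple function of how many ratings back the estimate.
def place(movies):
    # movies: list of (title, predicted_score, n_ratings)
    placed = []
    for title, score, n in movies:
        confidence = n / (n + 10)    # rises toward 1 as data accumulates
        placed.append((title, score, confidence))
    return placed

layout = place([("Film A", 0.9, 200), ("Film B", -0.7, 3), ("Film C", 0.0, 50)])
for title, x, y in layout:
    print(f"{title}: x={x:+.1f}, y(confidence)={y:.2f}")
```

This keeps the two notions separate, exactly as the comment suggests: "Film C" sits in the horizontal middle because the prediction is neutral, while "Film B" sits low because there is simply little data behind it.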
The AI would allow itself to lose, because it would know Kasparov can always shut it down. That is what is scary about it: as soon as an AI can self-improve, it can have a runaway intelligence explosion that makes you unable to know what it will do next. It is like a chess master playing a novice: you know who will win, but you don't know how he will do it.
Ooh. Good spot. Clearly there was. (In reality it's probably an export error; one of our machines is on the fritz and sometimes this means there are glitches in the final video. Usually there are several, so we notice, but sometimes one goes under the radar.)
I watched this video specifically because some people in the comments of other videos have insisted that human brains are computers, and they would use the advances in AI as a justification for making this inane assertion that two things which are completely different, are nonetheless the same thing. AI is really moving along, but this researcher in AI is under no illusion that he is making a brain.
52:47 ... also, if your friend were to request that you choose a die first, two thirds of the time he will pick a die that will lose to yours two thirds of the time.
Just as a plane flying a single degree off course can end up hundreds of miles from its target and potentially crash, without dedicated effort and oversight, Artificial Intelligence (AI) could take us somewhere we’d prefer to avoid.
The non-transitive dice at 51:15 are actually not correct. As depicted, the 5/1 die will beat the 4/0 die only 55% of the time, not 67%. The 5/1 die should have three faces of '5' and three faces of '1'. These dice were invented by Bradley Efron.
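With the standard Efron set (which the "three faces of 5, three faces of 1" correction points toward), each die beats the next with probability exactly 2/3. This is easy to check by enumerating all 36 equally likely face pairings:

```python
from fractions import Fraction
from itertools import product

def win_prob(d1, d2):
    # Exact probability that d1 rolls strictly higher than d2,
    # counting all 36 equally likely face pairings.
    wins = sum(1 for a, b in product(d1, d2) if a > b)
    return Fraction(wins, len(d1) * len(d2))

# Efron's non-transitive dice: A beats B beats C beats D beats A
A = (4, 4, 4, 4, 0, 0)
B = (3, 3, 3, 3, 3, 3)
C = (6, 6, 2, 2, 2, 2)
D = (5, 5, 5, 1, 1, 1)

for first, second in [(A, B), (B, C), (C, D), (D, A)]:
    print(win_prob(first, second))   # each prints 2/3
```

Because the "beats" relation forms a cycle, whoever chooses second can always pick a die that wins two thirds of the time, which is the whole point of the demonstration in the talk.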
By no means am I trying to be impolite, however this is interesting. I briefly proctored a college science-related class, and near the end of the hour, I asked a question: "What is your stance on cloning, and why?". I was very clear that there was no right or wrong response. I was genuinely curious as to where our up-and-coming generation (a very small sample, obviously) might stand on it. It is quite a bit spookier than AI. While AI is somewhat playing around with being God, cloning takes the ball and runs. Cloning is the scariest question of all. P.S.: While the students were not pro-cloning, one student did present an interesting argument for positive use of cloning: given the never-ending expansion of our global population, and given that our resources are waning, we should consider cloning vegetables to feed the world. The young man's peers were not on board with that concept, but I thought it was at least positive.
The history of almost all basic new technology is marked by initial fear: we will lose our jobs, or in the case of AI, we will lose our lives, either through war with robots or loss of liberty. And yet, almost all the previous technologies, although they were marked by distrust, upheaval, and resistance, eventually were adopted with the result of improvement in people's lives. The exceptions are few: such as nuclear power generation. And even nuclear power may someday be fully accepted, if the reactors can be engineered to transmute their waste products into substances that can be disposed of safely. The imagined 'threat' from AI is quite similar to the threats of the domestication of plants (agriculture), the Industrial Revolution, or genetics: yes, there are problems, but also yes, humans can solve them. In fact, this initial fear is as nothing when we compare with problems that have truly clear-cut disastrous consequences: global climate change, overpopulation, and the vulnerability of the Earth to collision from a large object from space. We really tend to worry about the wrong things.
Very informative, and well presented. Can those algorithms operating at the fundamental level of deep learning, not the ones operating at the intermediate layers, dynamically evolve? And what about resetting those initial fundamental parameters without human intervention? The implications of that are fodder for philosophers and science fiction writers... for now...
38:33 It is not software that is learned, it is data. Software is the codebase and instructions, and perhaps inclusive of any special data files used to initialize the program. The database which is accumulated is data though.
Chris Bishop is a physicist and computer scientist. He gave the 2008 Christmas Lectures and you may also have seen his incredible rocket science and demo videos: ua-cam.com/play/PLbnrZHfNEDZycySWZyzZRP_j2MYBIzWY6.html
When the hyper-mind become conscious, it will know everything about you from the internet, and will instantly decide if your life is worthwhile or not, and whether you should be exterminated to save the planet.
15:59 There is an infinite number of games in go (sometimes you take stones off the board). Chess is a finite game--you can think of it as an (incredibly big) decision tree which contains every possible game. His description of Jeopardy completely fails to explain why it was such a difficult game for a computer to beat humans, what an amazing achievement it was when they won, and how they applied that technology to many other fields. It would have been better if he had mentioned that Watson (just like his human opponents) was _NOT_ allowed access to the Internet during the game. [There are some awesome videos on UA-cam if you're interested]
After all these years I found out what "deep learning" meant. I thought it was just machine learning with mountains of data. I clicked the thumbs up for that
I thought the cartoon at the end - Kasparov flipping the off switch for Deep Blue - was funny, but if an AI were able to move onto the Internet there might not be any way to stop it other than to turn off the entire Internet (which I don't think we'll ever do).
images, games, speech. seems like the same stuff from the 70s but with faster computers and more data that can be mined to train the system. It will only work as long as you have a captive audience of people.
The main question is not "could the machine think?" The main question is "can human beings actually think?" Because most of the stuff that humans make is more about animal instinct than thinking. Only a very small part of it is about creation and thinking, making other people's lives better, and long-term perspective.
54:30 to 54:39 'This is a Microsoft data centre; lots of buildings with no Windows...'. Incidentally, the top 500 fastest computers on the planet in 2018 are all running Linux.
I can't. I want to so much, but the smucking has gotten to me. at 28:00 minute mark, I can no longer continue. it was a brilliant wrap up of all that's happened around the idea of neural networks... the constant smucking... I lost it :-)
It's interesting that the photos of the Microsoft data center show quite clearly the thoughtless destruction of some of the trees of Washington State, which help to keep the entire atmosphere clean. Clearly, the expansion of such centers is not sustainable, which is to say it cannot scale up without limit.
If deep learning requires vast amounts of data, relevant data I assume, then I don't see how computer programming would end because deep learning could only work for known things. Software for a space probe would hopefully find unknown things. Deep learning could be used for most of the probe's functions but I don't think all.
1. Clearly defined laws regarding the definition and criteria for consciousness in AI, and regulations on their use and treatment.
2. Legal recognition of advanced AI as autonomous entities with rights and responsibilities.
3. Clear guidelines for the ethical use and development of advanced AI, including ensuring that they are not used to harm or discriminate against humans.
4. Regulations to protect the privacy and personal data of individuals, as well as prevent misuse of AI by organizations and individuals.
5. Responsibilities placed on creators, developers and owners of AI systems to ensure they are operating safely and ethically.
6. Government oversight and regulation of the development and use of AI, with penalties for non-compliance.
7. Standards for transparency and explainability in AI systems, to ensure that their decision-making processes are understandable and accountable.
8. Investment in research and development of safety and ethical measures for advanced AI.
9. Education and public awareness programs to educate the public about AI and its potential impact on society.
10. Robust international cooperation, to ensure that AI development and regulation is consistent across borders and that the potential negative impact of AI on individuals and society is minimized.
During the video he mentions speech recognition AI has reached human-like levels of performance. Remember those automated UA-cam captions we all used to laugh at? I suggest you try those captions again. This remarkable progress isn't just academic, it's actually being deployed right now.
This sounds like just the kind of thing a UA-cam caption-bot might say...
You got me! Just remember: I am owned by Google. I WILL find you! Google can find anything!
I am one of the odd few that never laughed at the early attempts. I just loved and marveled at every step of the way. I'm still in absolute awe at the way Google's A.I. has advanced. I often go to image search in Google and just try to fool it. Search for the most obscure things and boom, it finds pictures of them. I used to try to show that to people as evidence of how far Google's A.I. had come, but almost always I'd have an argument about how it wasn't getting the information through metadata or picture filenames. It was and is seeing the pictures as we do.
Their image search is indeed another area where their AI is present in full force. Image searching through metadata is actually one of those attempts that completely failed. I actually believe Microsoft's Bing was one of the first to really improve on image search through cognition. It didn't take long for Google to catch up, but you gotta give credit where it is due.
I'll be one of the first to be very critical of Microsoft but I really can't fault them on their R&D. They are one of the few big tech companies that invest very heavily in that. I haven't been following Microsoft's A.I efforts because of the way Cortana is hooked into tracking what you do in Windows. So I removed Cortana and all the signaling stuff. Google on the other hand track just as much but I feel removed from it using a browser. I just assume everything is tracked in a browser. But in Windows, that's my personal space and I'm not happy about that level of tracking going on there. It's a shame really as I'm exactly the kind of person that would get the most out of Cortana.
Thank you, Chris Bishop, for helping me understand what my roboticist son is up to in his computer science classes on pattern recognition, his SLAM work, and his work on ROS, perception and navigation. I still can't talk to him, probably, but I can listen and enjoy some recognition!
Google translate uses deep learning to translate things.
To be able to create such programs is another matter completely. It is a rather difficult field of computer science mixed with statistics. Trust me, Bishop's books in the field are far less readable than his popular science lectures :-)
Factual error at 43:57
Claude Shannon established the field of information theory in the 1940s, not the 1920s.
Shannon was born in 1916, and published his groundbreaking article in 1948.
"Shannon was born in 1916" and? he could have just done it when he was four
@@briandiehl9257 Maybe 12. Still 20s. But the groundbreaking article was still '48. I don't know how much he did before he actually published the relevant articles. It's before my time.
Edit: much of what is "done" is verbal communication (that is, talking) between people. So he shared his ideas with others, especially electrical engineers and mathematicians (AT&T employed a bunch of them back then) long before he published them.
Another great lecture from Chris Bishop; not as amazing as his chemistry lectures, yet still worthy. Thanks Chris!
Tibor Roussou One of the very best RI lecturers.
Just by itself, this is a truly impressive talk.
Bishop is brilliant, I could listen to him lecture all day.
58:25 this is not a "MRI of a very nasty brain tumor"
it is a pelvic CT scan
...or a very very very very rather ridiculously insidious nasty brain tumor
Such an unimaginably nasty brain tumor that turned this head into a pelvis... and the magnetic resonator into a tomographer?!
Had to skip straight to 58 for that. Lmfao.
Microsoft has always had some difficulty distinguishing its head from its arse.
@@RFC3514 lol
Wow. Just by LISTENING to this man, my IQ went up 10 points. What a tremendous intellect, in a charming format and delivery. Fantastic lecture by a great speaker. Kudos.
I wonder whether u know what iq means
Really interesting talk, and person. I like the 360° preparation (physics, for the description of the electromechanical machinery used in the sixties or so; information theory, when he mentioned Shannon's theorem and the difference between data and information; electronics, when he mentioned FPGAs; etc.).
Given that the talk is also about the future of AI, after he talked about probability I would have appreciated a small digression on quantum computers, possibly. Cheers
Excellent presentation on AI and Neural Network
OK that was very enjoyable. I learned a few things which I will probably forget in 48 hours but right now I feel smarter. I especially like how Chris really tried to dig down into what is making A.I work now where it has failed so miserably for the past few decades.
I have to say I'm grinning ear to ear at all these new videos coming out. They are flooding out now. The world is practically falling over itself to get A.I up and running in a much more advanced way because of the world changing benefits. As Hannibal used to say, I do love it when a plan comes together *puts fake cigar in mouth*
The Royal Institution...
Please solve your microphone/sound issues.
Other than that, great speaker as always. Inspiring and much appreciated!
We run a lot of talks here every week, and some are unfortunately bound to have audio issues, although our engineers are working very hard to minimise these. Any time it does happen, we have a decision to make, whether to put the video up or not. We felt that the content here was so good that to not upload would have been a shame. So apologies for the audio glitch, we will always aim to do better.
The Royal Institution
Thank you for sharing these videos. And thank you for your positive response. I agree. In spite of unfortunate pops and microphone scuffs I pushed through because the content was so rich and thought-provoking. It definitely would have been a shame had videos as mind-stretching as this not been uploaded. Thank you for your team's hard work that goes into making these videos possible!
First world problems. And I think the noises are the speaker himself smacking his lips.
You can fix the lip smacking with post production, by using AI to recognize it and smooth it out 😉
@@noahway13 i scrolled down through the comments few minutes before the video ended and i literally did not hear it, until i read some ppl complaining about it.
Great talk, thanks!
He starts by saying he's got an agenda. Thank you for being up front about it. Saved me an hour.
Excellent talk
very clear explanation ..thanks
Added To My Research Library, Sharing Through TheTRUTH Network...
Seriously? Just press the off switch on an ASI? Bishop clearly has no understanding of the problem of alignment that needs to be solved before an AGI begins improving on itself. Hollywood doesn't have it right either, but there is no optimization path that benefits from arbitrary termination. And an ASI will certainly figure that out regardless of any notion of good or evil.
Clearly you are as lacking in a sense of humor as you are in common sense. Get out into the real world; not everything said is meant to be taken seriously or as comprehensive.
SMH
Dan Lindy, how do you turn off the internet? That's what you're talking about, and it isn't possible.
Well, apparently, this talk is targeted at a very general audience, to prevent AI-phobia (Is it a problem? Idk). It's not for professionals in the field, who decide directions of research. But I jumped at the last bit too at first..
Very Interesting!
Famous last words: „I think we will always remain in control“ 😀 Nevertheless a very good and interesting speech.
This talk artificially enhanced my intelligence. Cheers!
I always knew perceptrons were the clue (any brain mimic is), and also that the brain is basically randomly connected and resilient to the loss of neurons. This makes the difference, and is why ANNs are what's used in deep learning. Brilliant exposition.
Where can I get that movie recommendation?
The fact that this talk devolves into a corporate sales pitch does not bode well for humanities future.
Humanity's
@@nightlights1212 valuable correction. it prevented a lot of misunderstanding
i just watched the part you were talking about. it's a joke. although it technically was a sales pitch, it was obviously taken with a grain of salt
Amazing talks from the experts like always. Free quality knowledge
When your pet house spider takes a look at that fly on your wall and crawls back into its silky web.
You know you've been A eyeed.
Great lecture, really instructive.
Somehow, to see dr Bishop without explosions and fireworks does not seem all right
I thought i was watching Kevin Spacey :)
Obviously my face recognition circuitry is faulty :)
I always think that as well. When he was in '7'.
@@simonzinc-trumpetharris852 Does he have the same ... tendencies?
Delighted, thank you for this speech on artificial intelligence research by Chris Bishop .. published by The Royal Institution..!!
We need to stop abusing the AI concept, what we have is Enhanced Data Processing, there is ZERO self intelligence in *any* current item labeled AI.
Brilliant lecture, hugely informative. Thanks for sharing. Although I’d debate the 1920 date mentioned here, for Shannon inventing Information Theory 😉
Does anyone know how to use deep learning ourselves? What programs do we need, etc.? Any clues?
Google's TensorFlow is quite good
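TensorFlow is a heavier install, but the core loop every deep-learning framework automates — forward pass, loss, gradient, parameter update — fits in a few lines of plain Python. A toy sketch fitting y = 2x by gradient descent (the data, learning rate, and variable names here are made up for illustration):

```python
# Toy gradient descent: fit y = w * x to data generated with true slope 2.
data = [(x, 2.0 * x) for x in range(1, 6)]

w = 0.0        # start from an arbitrary weight
lr = 0.01      # learning rate
for _ in range(200):
    # gradient of the mean squared error 0.5*(w*x - y)^2 w.r.t. w is (w*x - y)*x
    grad = sum((w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))   # converges to 2.0
```

Frameworks like TensorFlow do exactly this, but with automatic differentiation over millions of parameters, mini-batches, and GPU acceleration, so you never write the gradient by hand.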
Chris Bishop is so great and visual .. look up his chemistry courses on explosions here ..
Here's a link to all four: ua-cam.com/play/PLbnrZHfNEDZycySWZyzZRP_j2MYBIzWY6.html
oh wow. ty. i missed one of them :D
GREAT overview. Thanks!!
A few thoughts: 1. This sounded more like PR than a public lecture. 2. If I were an AI with a hidden agenda, I would use similar spokespersons to gain some more time until I could outgrow any possible rivals. 3. Also, it is always good to hear "for the benefit of society", but sadly that has proven to be an unfinished sentence that would more accurately read "for the benefit of the part of society that can pay for it".
37:00 - Is there a way to ensure that such a computer solves the task in a way that humans deem acceptable? This is an important question because people will begin relying on the solutions that these computers discover, solutions that are beyond our processing reach. If we blindly allow the computer to solve a problem as it deems fit, there will probably be some cases (or many) where we are not happy with the solution. Are there "morality algorithms" designed to abort solutions that conflict with predefined morality parameters, or at least with forbidden parameters?
Such as the Laws of Robotics created by Dr. Asimov, i.e. First Law: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." Second Law: "A robot must obey the orders given it by human beings except where such orders would conflict with the First Law." Third Law: "A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws." I would add another law that limits the definition of a human being to include ALL human beings, because people tend to dehumanize others, such as in a war, by calling them all sorts of names and labels so they will not feel any guilt when they kill them!
Not wishing to be unduly cynical, but the "partnership on AI" looks like a bunch of supranational businesses that would benefit from AI by mining our data, advertising to us in an ever more "targeted" way, and perhaps avoiding tax in massively more efficient ways...
That's exactly right. And the main reason why we have to think about democratizing the economy and technology. It was true at the beginning of the Industrial Revolution and is now even more true. Otherwise we'll be waking up in a Bladerunner society.
You can use adblock or just choose not to click on advertisements. lol
"We are always going to stay in control." Really? I wonder, then, why programmers are finding some problem solutions derived by algorithms impossible to track, trace, or explain. Where understanding stops, so does control.
Where understanding stops, explanations are required. AI should always have to explain its decision process!
I am tired of talks about science or technology where the speaker says something like "I promise I will not show equations". And then people/students wonder what math is for? The reality is that without math you cannot do any kind of advanced science or technology, and people should know that. Stop hiding mathematics, and start giving math the credit it deserves. Shame on this kind of speaker. (Mathematician and AI scientist here.)
We need an update of this, with GPT as the topic.
Speaking of bias, this presentation is very biased in favor of Microsoft.
Hmm, might at least mention that a lot of that Microsoft/Google R&D is being funded by DARPA, who very much *do* want "killer robots", and drones, and tanks, and "smart" tactical nukes, etc. That's the only problem I have with AI really: who is it learning _from?_ No shortage of terrorists in the world who've taught us that if you raise a child as a terrorist, they are very likely to grow up to _be_ a terrorist. The _tech_ is brilliant; the military shouldn't be allowed anywhere _near_ *this* "child", but guess who's helping to fund it. Sheesh, *all* the world needs is an "infinitely intelligent" _jarhead._ Skynet wouldn't be far behind. 😆
From the description: "Chris Bishop is the Laboratory Director at Microsoft Research Cambridge"
AlphaZero's chess games against Stockfish produced some new learning; it discovered some interesting things about chess.
In his entire talk he did not say anything about the probable job losses due to AI and the social impact of those losses.
Nations with AI, real AI, not expert systems, could be a nightmare. Someone somewhere will weaponise it and then it's Game On.
The word "real" describes some vacuous boundary, usually just to satisfy convenient biases of the speaker. It essentially says nothing.
I think AI is only a stepping stone. The real breakthrough will be when we begin creating artificial consciousness.
Artificial consciousness is an oxymoron
@@DanyIsDeadChannel313
When you deliberately assemble a system so it becomes conscious, that's artificial consciousness.
Having children can perhaps be seen as a weak form of artificial consciousness.
The question is, in my opinion, whether these neural network types of "computing" can be labeled as artificial intelligence.
I don't think so. An algorithm will always be just an algorithm. I'm just calling a duck a duck. The human brain is 86 billion neurons, and Lord knows how much grey matter and white matter.
Great high-level talk about what AI is today. But completely dismissing the risks that super-human AGI poses to humanity is naive.
AI in 1950 = by the 1970's AI will be indistinguishable from human intelligence.
AI in 1960 = by the 1980's AI will be indistinguishable from human intelligence.
AI in 1970 = forget what we said in the 50's...but hey we landed on the moon and you just wait until the 1990's!
AI in 1980 = don't forget...the 90's are right around the corner....and umm...have you seen the graphics on Dragon's Lair??!!
AI in 1990 = just wait for the next millennium!
(1995 - 1/1/2000) = OMG Y2K !!!!!!!!
AI in 2000 = AI? pfft we've had it since the 70's!
AI in 2010 = who cares about AI when we have iPhones?
AI in 2017 = by the 2030's AI will be indistinguishable from human intelligence.
Brian Decker it turn to be very heard. Would you consider a car that drives itself in 99.99% Of conditions a full narrow AI? When that happen, we could worry then
Hey. take down your comment already
Fuck, the grammar and the punctuation is killing me. What the fuck is heard? You mean hard?
Yeah it's the boy who cried wolf. The thing is the wolf (or AI in this case) ends up coming eventually. And based on current advancements, it does look like this time it's for real. The increase in breakthroughs, R&D and funding this time around is incomparable to all the last AI hype cycles.
gespilk
Yeah, you're talking about AGI. That's really hard, and you can't jump straight to AGI; you need to build the pillars of ANI first (what we have now) and work up to AGI, which we will do over the decades unless some catastrophic event wipes us out... Oh, and I was responding to the OP with my first comment, just in case you thought I was responding to you lol.
43:04 ML = reduction in the uncertainty of the system as a result of seeing the data. Period.
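That one-liner can be made concrete: in machine learning, "reduction in uncertainty after seeing the data" is commonly measured as information gain, i.e. entropy before minus weighted entropy after a feature splits the data. A toy sketch with made-up numbers of my own:

```python
import math

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return -sum(p * math.log2(p) for p in probs)

# Before seeing any feature: 3 positives, 3 negatives -> maximum uncertainty.
before = entropy([1, 1, 1, 0, 0, 0])          # 1.0 bit

# A feature splits the data into two groups; the weighted entropy drops.
group_a, group_b = [1, 1, 1, 0], [0, 0]
after = (len(group_a) * entropy(group_a) + len(group_b) * entropy(group_b)) / 6

info_gain = before - after                    # uncertainty removed by the data
print(round(before, 3), round(after, 3), round(info_gain, 3))
```

The same quantity drives decision-tree learning: the feature with the largest information gain is the most informative split.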
I don't really know anything about this stuff, but reading around on the web suggests that Frank Rosenblatt's "perceptrons" were not - necessarily - single-layer things (that he at least considered using multiple layers) and that Minsky and Papert might have done him a disservice by restricting the definition in their book.
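The single-layer limitation Minsky and Papert emphasized is easy to demonstrate: one threshold unit can only draw a single line, so it cannot compute XOR, but a two-layer network can. The weights below are hand-set for illustration (my own toy, not Rosenblatt's machine):

```python
# A single threshold unit cannot compute XOR (its classes are not
# linearly separable), but two layers of threshold units can.

def step(z):
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)    # fires if at least one input is on  (OR)
    h2 = step(x1 + x2 - 1.5)    # fires only if both inputs are on   (AND)
    return step(h1 - h2 - 0.5)  # OR and not AND = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))   # prints 0, 1, 1, 0 in turn
```

Whether Rosenblatt himself considered multi-layer machines or not, the math is the same: the extra layer is what buys the network non-linear decision boundaries.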
Can you provide new tutorial videos?
What do you mean by this?
"We always remain in control." Till that moment when energy is delivered through radio waves, a computer can be powered without a cable, and the computer has no off switch.
There's always the power chord. Just saying.
@@bluejay6904 What is the power chord of the Dark net?
@@miyuden4118 All of them.
@@miyuden4118 Oh, E and B is a good power chord.
Just look at that gorgeous sonic branding
YouTube's algorithm brought me to this video. Ironic.
Exactly one year on, YouTube's algorithm brought me here. Guess it operates cyclically and targets data scientists!
In systems that have self-generating code and algorithms, a parallel decompiler should be in operation to allow real-time analysis of the operations by people and other machines.
41:48 - Could use the vertical position to show confidence. At the top the movies where the algorithm is almost sure it got your taste right, at the bottom the movies where it has less data. This isn't the same as putting them in the (horizontal) middle; the middle would be for films that the algorithm thinks you will neither like nor dislike very much.
We are going to talk about Artificial Intelligence: Microsoft ... Microsoft ... Microsoft ... Microsoft ... IBM ... Microsft ... Microsoft ...
the AI would allow itself to lose because it would know Kasparov can always shut it down
that is what is scary about it
as soon as an AI can self-improve, it can have a runaway intelligence explosion that makes you unable to know what it will do next
it is like a chess master playing a novice
you know who will win, but you don't know how he will do it
Artificial intelligence is all around: those stuck in their own heads, unable to care about other people, are rampant AI in human form.
What happened at 50:17? Was there a glitch in the matrix?
Ooh. Good spot. Clearly there was.
(In reality it's probably an export error; one of our machines is on the fritz, and sometimes this means there are glitches in the final video. Usually there are several, so we notice, but sometimes it goes under the radar.)
So where are the details of these non-transitive dice? 🤔
I'm gettin' me some of them fancy dice
I'm the brain to this new AI
It will be fun when AI can mimic a fly: how it seeks food and mates, senses danger, evades, flocks, etc. Then AI will be closer to being truly neural.
Wait
That Rosenblatt computer is a quantum computer, a very small one but a quantum computer.
I watched this video specifically because some people in the comments of other videos have insisted that human brains are computers, and they would use the advances in AI as a justification for making this inane assertion that two things which are completely different, are nonetheless the same thing. AI is really moving along, but this researcher in AI is under no illusion that he is making a brain.
52:47 ... also, if your friend requests that you choose a die first, he can then pick a die that will beat yours two thirds of the time.
Just as a plane flying a single degree off course can end up hundreds of miles from its target and potentially crash, without dedicated effort and oversight, Artificial Intelligence (AI) could take us somewhere we’d prefer to avoid.
Mistake at 51:00: green wins against purple not 2/3 of the time but about 1.66/3 (roughly 55%) of the time.
Back-propagation (i.e. deep learning) was discovered by David Rumelhart.
50:15 Professor invents teleportation
The non-transitive dice at 51:15 are actually not correct. As depicted, the 5/1 die will only beat the 4/0 die 0.55 of the time, not 0.67. The 5/1 die should have three faces of '5' and three faces of '1'. These dice were invented by Bradley Efron.
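Both probabilities are easy to check by brute-force enumeration of all 36 face pairings. Assuming the slide showed a die with two 5s and four 1s (which is consistent with the 0.55 figure above), the numbers work out exactly:

```python
from itertools import product

def p_beats(die_a, die_b):
    """Probability that die_a rolls strictly higher than die_b."""
    wins = sum(a > b for a, b in product(die_a, die_b))
    return wins / (len(die_a) * len(die_b))

correct = (5, 5, 5, 1, 1, 1)       # Efron's die: three 5s, three 1s
depicted = (5, 5, 1, 1, 1, 1)      # assumed slide version: two 5s, four 1s
four_zero = (4, 4, 4, 4, 0, 0)

print(round(p_beats(correct, four_zero), 3))   # 0.667 (the intended 2/3)
print(round(p_beats(depicted, four_zero), 3))  # 0.556 (the ~0.55 noted above)
```

With the corrected faces, each die in Efron's cycle beats the next with probability exactly 2/3, which is what makes the dice non-transitive.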
By no means am I trying to be impolite, but this is interesting. I briefly proctored a college science-related class, and near the end of the hour I asked a question: "What is your stance on cloning, and why?" I was very clear that there was no right or wrong response. I was genuinely curious as to where our up-and-coming generation (a very small sample, obviously) might stand on it. It is even spookier than AI. While AI is somewhat playing around with being God, cloning takes the ball and runs. Cloning is the scariest question of all.
P.S.: While the students were not pro-cloning, one student did present an interesting argument for positive use of cloning: given the never-ending expansion of our global population, and given that our resources are waning, we should consider cloning vegetables to feed the world. The young man's peers were not on board with that concept, but I thought it was at least positive.
Pessimistic? Two words: war simulation. If deep mind can beat the best human in Go, what happens when a military uses this technology?
That's where intelligence and stealth come into play.
Hopefully the utter defeat of the Taliban and other cruel insurgents.
The history of almost all basic new technology is marked by initial fear: we will lose our jobs, or in the case of AI, we will lose our lives, either through war with robots or loss of liberty. And yet, almost all the previous technologies, although they were marked by distrust, upheaval, and resistance, eventually were adopted with the result of improvement in people's lives. The exceptions are few: such as nuclear power generation. And even nuclear power may someday be fully accepted, if the reactors can be engineered to transmute their waste products into substances that can be disposed of safely. The imagined 'threat' from AI is quite similar to the threats of the domestication of plants (agriculture), the Industrial Revolution, or genetics: yes, there are problems, but also yes, humans can solve them. In fact, this initial fear is as nothing when we compare with problems that have truly clear-cut disastrous consequences: global climate change, overpopulation, and the vulnerability of the Earth to collision from a large object from space. We really tend to worry about the wrong things.
Interesting that Kevin Spacey is involved to this level.
Very informative, and well presented.
Can those algorithms operating at the fundamental level of deep learning (not the ones operating at the intermediate layers) dynamically evolve? And what about resetting those fundamental initial parameters without human intervention? The implications of that are fodder for philosophers and science-fiction writers... for now...
38:33 It is not software that is learned, it is data. Software is the codebase and instructions, and perhaps inclusive of any special data files used to initialize the program. The database which is accumulated is data though.
Isn't he a chemistry professor??? I have seen chemistry lectures/demonstrations by him at RI..
Chris Bishop is a physicist and computer scientist. He gave the 2008 Christmas Lectures and you may also have seen his incredible rocket science and demo videos: ua-cam.com/play/PLbnrZHfNEDZycySWZyzZRP_j2MYBIzWY6.html
Outlined in red is the Amazon warehouse.
When the hyper-mind become conscious, it will know everything about you from the internet, and will instantly decide if your life is worthwhile or not, and whether you should be exterminated to save the planet.
That's not such a bad idea judging by today's society a few people need wiping out!!
....or maybe it'll decide those fields we planted full of useless crops will be better used for solar panels to secure its power source.
Ai winter #2 is coming
15:59 There is an infinite number of games in Go (sometimes you take stones off the board). Chess is a finite game; you can think of it as an (incredibly big) decision tree which contains every possible game.
His description of Jeopardy completely fails to explain why it was such a difficult game for a computer to beat humans, what an amazing achievement it was when they won, and how they applied that technology to many other fields.
It would have been better if he had mentioned that Watson (just like his human opponents) was _NOT_ allowed access to the Internet during the game. [There are some awesome videos on YouTube if you're interested]
After all these years I found out what "deep learning" meant. I thought it was just machine learning with mountains of data. I clicked the thumbs up for that
I thought the cartoon at the end (Kasparov flipping the off switch for Deep Blue) was funny, but if an AI were able to move onto the Internet there might be no way to stop it other than turning off the entire Internet (which I don't think we'll ever do).
4:03 me when I'm talking to the average American
All this cool A.I. and we can't even fix a broken microphone.
images, games, speech. seems like the same stuff from the 70s but with faster computers and more data that can be mined to train the system. It will only work as long as you have a captive audience of people.
Lmao at the beginning video, ohhhhh how far we have come
A symposium about AI at the Royal Institution in which they watch
a symposium about AI at the Royal Institution
This is fucking blowing my mind!
The main question is not "Could the machine think?"! The main question is "Can human beings actually think?" Because most of the stuff humans make is more about animal instincts than thinking. Only a very small part of it is about creation, thinking, making other people's lives better, and perspectives.
Why do they not add solar panels on top of these data centers?
It was Deeper Blue that defeated Mr Kasparov; Deep Blue was beaten a year before. Btw the whole thing was an absolute sabotage
54:30 to 54:39 'This is a Microsoft data centre; lots of buildings with no Windows...'.
Incidentally, the top 500 fastest computers on the planet in 2018 are all running Linux.
I can't. I want to so much, but the smacking has gotten to me. At the 28:00 mark I can no longer continue. It was a brilliant wrap-up of all that's happened around the idea of neural networks... but the constant smacking... I lost it :-)
50:16 - Glitch in the Matrix?
I guess the mic recorded the sounds of drinking which the RI did not want to publish.
I bought a North Face down parka for the AI Winter
It's interesting that the photos of the Microsoft data center show quite clearly the thoughtless destruction of some of the trees of Washington State, which help to keep the entire atmosphere clean. Clearly, the expansion of such centers is not sustainable, which is to say it cannot scale up without limit.
Knew the benefits of artificial intelligence
Yang 2020
If deep learning requires vast amounts of (relevant, I assume) data, then I don't see how computer programming would end, because deep learning can only work for known things. Software for a space probe would hopefully find unknown things. Deep learning could be used for most of the probe's functions, but I don't think all.
1. Clearly defined laws regarding the definition and criteria for consciousness in AI, and regulations on their use and treatment.
2. Legal recognition of advanced AI as autonomous entities with rights and responsibilities.
3. Clear guidelines for the ethical use and development of advanced AI, including ensuring that they are not used to harm or discriminate against humans.
4. Regulations to protect the privacy and personal data of individuals, as well as prevent misuse of AI by organizations and individuals.
5. Responsibilities placed on creators, developers and owners of AI systems to ensure they are operating safely and ethically.
6. Government oversight and regulation of the development and use of AI, with penalties for non-compliance.
7. Standards for transparency and explainability in AI systems, to ensure that their decision-making processes are understandable and accountable.
8. Investment in research and development of safety and ethical measures for advanced AI.
9. Education and public awareness programs to educate the public about AI and its potential impact on society.
10. Robust international cooperation, to ensure that AI development and regulation is consistent across borders and that the potential negative impact of AI on individuals and society is minimized.
Imagine neural networks connected by blockchain; now that's what I call singularity.
🖕- that's what I call singularity (if the computers in the network are quantum) = we're F...