*DeepMind x UCL | Deep Learning Lectures | 12/12 | Responsible Innovation*
*My takeaways:*
*1. Overview **0:16*
*2. Motivation **0:50*
2.1 Risk 2:25
2.2 What are our responsibilities? 6:25
*3. Specification driven ML **7:25*
*4. Building adversarially robust networks **9:36*
4.1 Adversarial training 12:28
4.2 Adversarial evaluation: finding the worst case 16:00
4.3 Gradient Obfuscation 24:58
4.4 Verification algorithm 27:20
4.5 Other specifications 33:30
*5. Ethics and technology **34:49*
5.1 Ethical training data 36:52
5.2 Algorithmic bias 38:31
5.3 Power and responsibility 40:13
5.4 Science and value 41:28
5.5 Responsible innovation 42:21
*6. Principles and processes **42:21*
6.1 Principles 43:38
6.2 A five-step process 46:10
6.3 Two final tests 56:18
*7. The path ahead **58:30*
Thank you for these summaries
@@neurophilosophers994 You are welcome!
ua-cam.com/video/r_Q12UIfMlE/v-deo.html
Awesome lecture, thanks! The adversarial evaluation part was especially enlightening :)
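For anyone curious what the adversarial-evaluation part (around 16:00) boils down to: you search for the worst-case input inside a small perturbation ball around a clean input, typically by projected gradient ascent on the loss. Here is a minimal sketch on a toy logistic classifier; the weights, inputs, and function names are all made up for illustration, not anything from the lecture's code.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy linear classifier: score(x) = W . x, label y in {-1, +1}.
W = [2.0, -1.0]

def loss(x, y):
    # Logistic loss; larger means the model is more wrong on (x, y).
    return -math.log(sigmoid(y * sum(wi * xi for wi, xi in zip(W, x))))

def grad_x(x, y):
    # Analytic d(loss)/dx for the logistic loss above.
    s = sigmoid(-y * sum(wi * xi for wi, xi in zip(W, x)))
    return [-y * wi * s for wi in W]

def pgd_attack(x0, y, eps=0.3, alpha=0.1, steps=10):
    # Projected gradient ascent on the loss, staying inside an
    # L-infinity ball of radius eps around the clean input x0.
    x = list(x0)
    for _ in range(steps):
        g = grad_x(x, y)
        x = [xi + alpha * (1.0 if gi > 0 else -1.0) for xi, gi in zip(x, g)]
        x = [min(max(xi, x0i - eps), x0i + eps) for xi, x0i in zip(x, x0)]
    return x

x_clean = [1.0, 0.5]
x_adv = pgd_attack(x_clean, y=+1)
# The worst case found in the ball should incur at least the clean loss.
print(loss(x_clean, +1) < loss(x_adv, +1))  # True
```

The same loop (run during training on each batch, with the model then updated on the perturbed inputs) is essentially the adversarial-training recipe from section 4.1.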
Great lecture and big thanks to DeepMind for sharing this great content.
I'd never heard of gradient obfuscation before. Optimizers are challenging to reason about!
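A toy illustration of the gradient-obfuscation failure mode discussed around 24:58: a "defence" that destroys gradients makes gradient-based attacks stall, yet the model is no more robust, since a gradient-free search still finds adversarial inputs. The quantised model, thresholds, and attack budgets below are invented for the sketch.

```python
import random

# Toy "defended" classifier: it rounds its input to one decimal place
# before scoring, so the gradient with respect to x is zero almost
# everywhere, even though small input changes can still flip the output.
def score(x):
    return 2.0 * round(x, 1) - 1.0   # predict +1 when score > 0

def numerical_grad(x, h=1e-6):
    # Finite-difference gradient; inside a rounding plateau this is 0.
    return (score(x + h) - score(x - h)) / (2 * h)

def gradient_attack(x, eps=0.15, alpha=0.05, steps=10):
    for _ in range(steps):
        g = numerical_grad(x)
        if g == 0:
            break                    # no direction to follow: attack stalls
        x = x - alpha * (1.0 if g > 0 else -1.0)
    return x

def random_search_attack(x0, eps=0.15, tries=500, seed=0):
    # Gradient-free evaluation: sample points in the eps-ball and keep
    # any that the model misclassifies.
    rng = random.Random(seed)
    for _ in range(tries):
        x = x0 + rng.uniform(-eps, eps)
        if score(x) < 0:
            return x
    return x0

x_clean = 0.58                                    # score 0.2 -> predicted +1
print(score(gradient_attack(x_clean)) > 0)        # gradient attack fails
print(score(random_search_attack(x_clean)) < 0)   # random search succeeds
```

This is why the lecture stresses evaluating with attacks that do not rely on the defended model's own gradients before claiming robustness.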
This lecture taught me surprisingly little after already having watched previous DeepMind videos. The first half basically says "If your AI is bad at what it's doing, then it can't do good reliably," which is just common sense, and then explains the generator+discriminator model as one of the many improvements to AI that was already explained in an older video (a bit better, in my opinion) and that has basically nothing to do with ethics. The second half basically said "behave morally, please" and explained something that's basically just utilitarianism.
This shite is all about mind-control AI and enslaving the population.
Yeah, basically I don't think they'd have any idea how to control one of these neural networks if it started misbehaving, other than throwing out all the training data and starting again, hoping the behaviour doesn't come up this time. That's not a problem for AIs that play StarCraft, but what if a similar AI ran automated crop harvesters?
Why do none of the replies here have anything to do with what I said? If you want to comment on the video directly, please do that.
@ lol bald
"The second half basically said "behave morally, please" and explained something that's basically just utilitarianism."
But that's actually a very interesting point. In the end, there is not much more than hoping that researchers do not make immature iterations accessible. Voice imitation is a good example, where he also names solutions, or at least improvements. Ultimately, it is probably unavoidable that sooner or later someone will no longer adhere to "behave morally, please." No law or rule will help here. But we can limit circulation if the powerhouses stick to seemingly basic rules.
Let's also not forget that he only had 25 minutes available. Kinda hard to go beyond the basics.
Is there a lecture on Graph Neural Networks (GNNs)?
Just an idea my friend keeps talking about: wouldn't it be easier and more commercially/personally useful to begin with botanical flora? Use AI and big data to create a camera app that comprehensively identifies (a) nomenclature, (b) the parts of the plant (roots, seeds, leaves, bark), (c) ecological best practices and geographical/optimal agricultural growing conditions, (d) known medicinal/nutritional uses, (e) molecular breakdown, etc., and then store the info in a cloud-based botanical blockchain to ensure redundancy and integrity.
The fact that this company doesn't have a monthly demo release cycle is deeply troubling to me. :|
I would like to see an AI that can mix music, using compressors and EQ to balance the tracks and so on, since music mixing is a matter of human taste and of our sense of hearing.
AlphaZero vs. an updated Stockfish. Who's with me?
Bring it to more games like Warframe.
Nice video as always. I have a question, though: I'm trying to buy a laptop for CAD as well as programming. I've only just moved into the world of computer vision / machine learning, and I was wondering whether the HP that is supposedly coming later this month with a Ryzen 4800H and a 2060 would be an intelligent buy. I read that Ryzen doesn't support AVX-512 instructions, etc. Can you shed more light on this for me? Thank you.
We need Silicon Valley to tell us more about ethics 😂
DeepMind is in London
They have none: no ethics, no morals, be they in SV or London. I don't need them to tell me about ethics.
@@jamesle4330 It's owned by google