'How neural networks learn' - Part II: Adversarial Examples

  • Published 10 Jan 2018
  • In this episode we dive into the world of adversarial examples: images specifically engineered to fool neural networks into making completely wrong decisions!
    Link to the first part of this series: • 'How neural networks l...
    If you want to support this channel, here is my patreon link:
    / arxivinsights --- You are amazing!! ;)
    If you have questions you would like to discuss with me personally, you can book a 1-on-1 video call through Pensight: pensight.com/x/xander-steenbr...
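
As a concrete illustration of the "specifically engineered" wording in the description above: a minimal sketch (in PyTorch, not code from the video) of the fast gradient sign method, one of the simplest ways such images are crafted. The pretrained classifier `model` and the perturbation budget `epsilon` are assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, true_label, epsilon=0.01):
    """Craft an adversarial image with the fast gradient sign method.

    image: tensor of shape (1, C, H, W) with values in [0, 1]
    true_label: tensor of shape (1,) holding the correct class index
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step every pixel slightly in the direction that increases the loss,
    # then clamp back to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```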

COMMENTS • 93

  • @nimasanjabi626
    @nimasanjabi626 5 years ago +14

    This is a super-expert view on neural networks. It discusses fooling (or, in other words, hacking) NNs, while NNs themselves are still highly complicated structures for most people and even experts. Precious content, I say.

  • @nishparadox
    @nishparadox 6 years ago +62

    Just discovered this channel. Awesome content. Better than most of the hyped channels around regarding ML and Deep Learning. Cheers. :)

    • @dheten4462
      @dheten4462 5 years ago +1

      True..

    • @JonesDTaylor
      @JonesDTaylor 4 years ago +1

      He stopped maintaining this channel I am afraid :(

    • @revimfadli4666
      @revimfadli4666 4 years ago

      @@JonesDTaylor nooo, why?

  • @TopGunMan
    @TopGunMan 5 years ago +5

    All of your videos are incredibly well made and presented. Perfect mix of detail and generality and excellent visuals. So hooked.

  • @Niteshmc
    @Niteshmc 5 years ago +1

    This video deserves so much more attention! Great Job!

  • @iiternalfire
    @iiternalfire 5 years ago +19

    Great content. One suggestion: could you please provide links to the research shown, in the video description?

  • @inkwhir
    @inkwhir 6 years ago

    Wow, your videos are fantastic! The format is great, the content is awesome... Please post more videos :D

  • @kareldumon808
    @kareldumon808 6 years ago

    Nice! Didn't know yet about cross-model generalization. Also nice to have your take on how to avoid and even exploit these attacks.
    Keep up the video-making & posting :-)

  • @bjbodner3097
    @bjbodner3097 6 years ago

    Super cool video! Love the depth of the content! Please keep making more videos:)

  • @maximgospodinko
    @maximgospodinko 6 years ago

    You deserve many more subscribers. Keep up the good work.

  • @hackercop
    @hackercop 2 years ago

    This was absolutely fascinating. Have liked and subscribed!

  • @pial2461
    @pial2461 5 years ago

    Your vids are just gold. Please do more videos on other nets like RNN, CNN, ladder nets, DBN, seq2seq, etc. I think you can make people understand better than anyone. Best of luck. Really a big fan of your content.

  • @ArturVegas
    @ArturVegas 6 years ago

    great work! keep developing your great channel! 💎

  • @MithiSevilla
    @MithiSevilla 5 years ago

    Thanks for making this video. I hope you also link to your references for this video in the description box like you did in part one. Thanks, again!

  • @haroldsu1696
    @haroldsu1696 6 years ago

    very good lectures, and thank you!

  • @drdeath2667
    @drdeath2667 5 years ago

    Great Job man and thanks a lot for this awesome content

  • @wvg.
    @wvg. 6 years ago

    Keep making videos, great job!

  • @hikingnerd5470
    @hikingnerd5470 6 years ago +3

    Great video! One suggestion is to include links to all relevant papers.

  • @abdoumerabet9874
    @abdoumerabet9874 5 years ago +1

    Your explanation is awesome, keep going.

  • @ScottJFox
    @ScottJFox 5 years ago +1

    Just subscribed for part III! :D

  • @jc-wh9mq
    @jc-wh9mq 3 years ago

    love your videos, keep it up.

  • @dacrampus2656
    @dacrampus2656 3 years ago

    Really great videos thanks!

  • @xianxuhou4012
    @xianxuhou4012 5 years ago +4

    Thanks for posting the awesome video. Could you please provide the reference (paper) at 14:12?

  • @mjayanthvarma6125
    @mjayanthvarma6125 5 years ago

    Bro, would love to see more and more content coming on your channel

  • @MartinLichtblau
    @MartinLichtblau 6 years ago

    They are a blessing! They show that NNs are getting something fundamentally wrong. And we can gain insight from those wrong classifications to understand what's really going wrong.
    I think we humans don't use any chroma or hue information for object detection - we only detect structural patterns (at first).

  • @nikab1852
    @nikab1852 4 years ago

    Great videos! What are your sources for this video? I'm trying to find the ostrich/temple confusion in a paper!

  • @dufferinmall2250
    @dufferinmall2250 6 years ago

    dude that was awesome. THANKS

  • @CodeEmporium
    @CodeEmporium 6 years ago

    This is really good content. Subscribed! So I'm a Machine Learning guy as well (I make similar videos on my channel) but I don't have a decent face cam. What camera do you use?

    • @ArxivInsights
      @ArxivInsights  6 years ago

      Thx!! I use my GoPro Hero 5 for filming and a clip on mic for audio which I sync afterwards while editing! Also bought a green screen + studio lights rig from Amazon :p

  • @igorcherepanov4765
    @igorcherepanov4765 6 years ago

    Regarding the optimization of the car frame: so we end up with an adversarial example, as you said. Can you point to some papers on this subject? Thanks

  • @williamjames6842
    @williamjames6842 5 years ago

    9:48 that comment was pretty deft. The neural network's sarcasm is showing. I'd give that comment a positive rating too.

  • @srgsrg762
    @srgsrg762 1 year ago

    Amazing content. keep it up

  • @SantoshGupta-jn1wn
    @SantoshGupta-jn1wn 6 years ago

    Great vid!

  • @tarsmorel9898
    @tarsmorel9898 6 years ago

    Awesome! Keep them coming ;-)

  • @DontbtmeplaysGo
    @DontbtmeplaysGo 6 years ago +2

    Thanks a lot for your videos. At the beginning of this series, you announced a third part on "Memorization vs Generalization", but I couldn't find it even though you posted other videos after that. Was it deleted for some reason, or is it still a work in progress?

    • @ArxivInsights
      @ArxivInsights  6 years ago +2

      Dontbtme Haven't made part III yet, it's coming though.. some day :p Gonna finish the series on Reinforcement Learning first I think :)

    • @DontbtmeplaysGo
      @DontbtmeplaysGo 6 years ago

      Good to know! Thanks for your reply and all your hard work! :)

  • @sushil-bharati
    @sushil-bharati 4 years ago

    @Arxiv Insights - The paper you showed does not claim that the sunglasses defeat 'every' facial recognition system. In fact, they are personally curated and would only work for a particular recognition neural net.

  • @binaryfallout
    @binaryfallout 6 years ago

    This is so cool!

  • @chizzlemo3094
    @chizzlemo3094 3 years ago

    this was so cool. MASSIVE SUBSCRIBE!

  • @doviende
    @doviende 6 years ago +1

    Great content.
    I'd like it if the sound quality were a bit better, particularly due to the echoes in your room. It sounds like you have some bare walls that are really reflective. Without changing your mic setup, you might be able to do something like hang up some towels to absorb the echoed sounds.

    • @ArxivInsights
      @ArxivInsights  6 years ago

      I know, there's a ton of echo in the room I'm filming in, need to find a fix for that! Thought the clip-on mic would help, but it's still not ideal, I know :p

  • @AlvaroGomezGrowth
    @AlvaroGomezGrowth 5 years ago

    SUPER GOOD CHANNEL. I have never seen ML videos for learning this good. Thank you very much. One question: can we defend against an adversarial attack on image recognition by adding a first step that simplifies the input image (e.g. the panda) to a vectorized version with smoothed edges and fewer colors, just to check the form? The noise would disappear in this process but the fundamentals of the image would stay the same. Then you can compare the result of the normal input with the simplified input.
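
A rough way to prototype the idea in the comment above (a sketch, not an established defense; preprocessing like this tends to only raise the attacker's cost): blur and color-quantize the input, then flag cases where the raw and simplified predictions disagree. The classifier `model`, the 224x224 input size, the palette size, and the blur radius are all assumptions.

```python
import torch
from PIL import ImageFilter
from torchvision import transforms

to_tensor = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])

def simplified_view(pil_img, colors=32, blur_radius=2):
    """Blur and color-quantize an image to wash out small pixel perturbations."""
    blurred = pil_img.filter(ImageFilter.GaussianBlur(blur_radius))
    return blurred.quantize(colors).convert("RGB")  # coarse palette, then back to RGB

def looks_adversarial(model, pil_img):
    """Flag the input when the raw and simplified views get different labels."""
    raw = to_tensor(pil_img).unsqueeze(0)
    simple = to_tensor(simplified_view(pil_img)).unsqueeze(0)
    with torch.no_grad():
        return model(raw).argmax(1).item() != model(simple).argmax(1).item()
```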

  • @septimusseverus252
    @septimusseverus252 3 years ago

    This video is amazing

  • @abdellahsellam912
    @abdellahsellam912 5 years ago

    A great video

  • @yajieli8933
    @yajieli8933 6 years ago

    Very good videos! What is the frequency of uploading new vids?

    • @ArxivInsights
      @ArxivInsights  6 years ago

      Yajie LI well I don't always have a lot of spare time besides work etc.. so I try to do something like one vid every three weeks :p Would love to do more, but currently that's kinda hard :)

  • @astrofpv3631
    @astrofpv3631 6 years ago +4

    Dude nice vid, Do you have an education background in AI?

    • @ArxivInsights
      @ArxivInsights  6 years ago +12

      Well I studied engineering, so I got the mathematical background from there, then took one course + a thesis in Machine Learning at university; everything else is self-learned (online MOOCs, blogs, papers, ...)

  • @Kram1032
    @Kram1032 5 years ago +1

    Ok so, "simple" solution? (I'm sure actually implementing this is a different story)
    GAN-like network, but it takes the input and does all kinds of transforms to it (noise, rotation, scaling, changing single letters / words, what have you), optimizing for minimal necessary transforms to get the network to classify any images as any given class.
    Basically the network is supposed to learn to generate and then overcome its own failure cases.
    "Just" protect against all kinds of sources of adversarial examples. (Because obviously it's super easy to know that you've covered all your bases and that you haven't overlooked any problem *cough)*
    Would that work?
    Perhaps something like IMPALA could be used to make it work on multiple possible variants of breaking a network at once?
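
What the comment above sketches is close in spirit to plain adversarial training: generate worst-case perturbations on the fly and train on them. A minimal sketch of that simpler baseline (not the GAN-style variant imagined above), assuming a PyTorch `model`, `optimizer`, and batches of `images`/`labels`; the FGSM budget `epsilon` is arbitrary.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.03):
    """One training step on a mix of clean and FGSM-perturbed inputs."""
    # 1) Craft adversarial versions of this batch.
    images = images.clone().detach().requires_grad_(True)
    F.cross_entropy(model(images), labels).backward()
    adv_images = (images + epsilon * images.grad.sign()).clamp(0, 1).detach()

    # 2) Train on clean and adversarial examples together.
    optimizer.zero_grad()
    batch = torch.cat([images.detach(), adv_images])
    targets = torch.cat([labels, labels])
    loss = F.cross_entropy(model(batch), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```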

  • @yusun5722
    @yusun5722 3 years ago

    Great video. Perhaps humans are robust to the adversarial examples of computers, and vice versa. In the end it's about how to align both adversarial distributions.

  • @infoman6500
    @infoman6500 5 months ago

    Glad to see that the human biological neural network is still much more efficient than a machine with an artificial neural network.

  • @threeMetreJim
    @threeMetreJim 5 years ago

    Some image pre-processing might help with stopping imperceptible pixel changes giving wrong results. If you extract the detail from the image, then blur the image to spread out any noise, before adding the detail back in, you'll at least have the areas that are large and without much variation mostly noise-free (detail mask --> blur detail mask --> use it to mask the original image to get the detail). Recognition would probably be best done in 2 steps: use a grey-scale image (just the detail?) for one channel, and the colours from the same, but heavily blurred, image for a second channel. Most objects can be identified in black and white; colour only adds a smaller amount of information (like whether an animal/insect is likely poisonous). Using a reduced resolution for the colour may also be of help; there's little point in letting a neural network distinguish between each of 16.7 million colours for object recognition; less to learn and less opportunity for small variations (that a human couldn't even see) causing upsets.
    Is that really Keanu Reeves? Looks more like Sylvester Stallone :->
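
A quick sketch of the two-channel idea from the comment above: a grayscale "detail" channel plus heavily blurred, low-resolution color channels, built with torchvision. The downscale factor and blur kernel are arbitrary assumptions, and whether this actually blocks attacks is untested here.

```python
import torch
from torchvision.transforms import functional as TF

def two_stream_input(image):
    """Split an image tensor (3, H, W) into detail + coarse-color channels.

    Returns a 4-channel tensor: one grayscale detail channel plus three
    heavily blurred, low-resolution color channels upsampled to full size.
    """
    detail = TF.rgb_to_grayscale(image)            # (1, H, W): structure only
    h, w = image.shape[-2:]
    coarse = TF.resize(image, [h // 8, w // 8])    # throw away fine detail
    coarse = TF.resize(coarse, [h, w])             # back to full size, now blurry
    coarse = TF.gaussian_blur(coarse, kernel_size=9)
    return torch.cat([detail, coarse], dim=0)      # (4, H, W) input for the network
```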

  • @wiiiiktor
    @wiiiiktor 6 years ago

    Maybe you could make a video on how to follow white papers in the field of AI (where to find them, how to create alerts, whether there are any such tools, whether there are good websites that track newly published papers, etc.). I don't know if this is a good topic, but just an idea. Greets! :)

  • @substance1
    @substance1 4 years ago

    Humans also have adversarial examples in the form of pareidolia, seeing faces in inanimate objects. It's an evolutionary thing that helps humans detect predators. An example is people who scour the images taken on Mars and see rocks that they claim are the heads of broken statues, when it's really just a rock that was photographed at a particular angle.

  • @absolute___zero
    @absolute___zero 4 years ago

    This just proves a few points:
    1) The problems of overfitting and the inability of deep nets to converge are not due to the complexity of the network, but due to the missing *Complement of a Set* ( en.wikipedia.org/wiki/Complement_(set_theory) ) of the training data. When you train a network you have to provide the *NOTs* of the train/test data too; these would be pictures of mammals, birds, humans, not just digits on a black background if we speak about the MNIST dataset. It is like you only believe in matter and forget about the dark matter of the Universe. Well, some day, because you didn't consider the whole Universe, this dark matter is gonna eat your planet.
    2) The adversarial examples are not really adversarial examples; they are just pointing out an inconsistency in the training method we are currently using. We need to modify our methods to include the *Complement of a Set*. This will increase training time by orders of magnitude, but you are going to get real generalization, just like it was originally conceptualized in the middle of the last century.
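
The testable core of the comment above is an explicit "none of the above" class trained on out-of-distribution images. A hypothetical sketch for an MNIST-style setup; `model` (emitting 11 logits), `digit_batch`, and `other_batch` (non-digit images) are assumptions.

```python
import torch
import torch.nn.functional as F

NUM_DIGITS = 10
REJECT_CLASS = NUM_DIGITS  # class index 10 = "not a digit"

def training_step(model, optimizer, digit_batch, other_batch):
    """Train an 11-way classifier: ten digits plus an explicit reject class."""
    digit_images, digit_labels = digit_batch
    other_images, _ = other_batch          # labels of the non-digit images are ignored
    images = torch.cat([digit_images, other_images])
    labels = torch.cat([
        digit_labels,
        torch.full((other_images.size(0),), REJECT_CLASS, dtype=torch.long),
    ])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(images), labels)  # model must output 11 logits
    loss.backward()
    optimizer.step()
    return loss.item()
```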

  • @hfkssadfrew
    @hfkssadfrew 5 years ago

    Do adversarial attacks work for regression?

    • @ArxivInsights
      @ArxivInsights  5 years ago

      Yes! Adversarial examples have been shown to exist for many different types of ML algorithms. There is a great talk by Ian Goodfellow on YouTube where he dives into this, can't find the title right now though..

    • @hfkssadfrew
      @hfkssadfrew 5 years ago

      Arxiv Insights this is great. Intuitively, adversarial examples should work best, or be most significant, for black-box fitted models with: 1. very, very high-dimensional input, 2. classification with softmax. The reason, I think, is that we lack enough data to cover the whole extremely high-dimensional vicinity volume near our training data. If the case is 1- or 2-dimensional, we can graphically find where the adversarial example should be, but it shouldn't be that similar to the original input. For regression, I think you can just do an SVD on the Jacobian of the neural network w.r.t. the input; the first direction should be the optimal adversarial direction in the vicinity limit. But optimization allows the optimal adversarial example to go far away. So I think it should be very interesting.
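
A sketch of the Jacobian/SVD idea in the reply above, using PyTorch autograd: the top right-singular vector of the Jacobian is the input direction that, to first order, moves a regression output the most. It assumes an unbatched model mapping a vector of shape (d_in,) to (d_out,); the step size `epsilon` is arbitrary.

```python
import torch
from torch.autograd.functional import jacobian

def worst_case_direction(model, x):
    """Top right-singular vector of the Jacobian dy/dx at input x."""
    J = jacobian(model, x)            # shape (d_out, d_in)
    _, _, Vh = torch.linalg.svd(J)
    return Vh[0]                      # unit-norm direction in input space

# First-order adversarial perturbation with an assumed budget epsilon:
# x_adv = x + epsilon * worst_case_direction(model, x)
```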

  • @geraldkenneth119
    @geraldkenneth119 2 years ago

    Things like this have made me appreciate the “artificial” in artificial intelligence, as it shows that the way AI works is very different than that of naturally evolved organisms, for better and for worse

  • @chrismorris5241
    @chrismorris5241 5 years ago

    9:16 column 7 row 3 predicts Sylvester Stallone

  • @christinealderson7357
    @christinealderson7357 6 years ago +3

    Where is part III?

    • @ArxivInsights
      @ArxivInsights  6 years ago +1

      christine alderson Still need to make that one :p But it's coming! Someday.. ;)

    • @christinealderson7357
      @christinealderson7357 6 years ago

      thanks, i look forward to more great vids in the future

  • @vegnagunL
    @vegnagunL 3 years ago

    If we look at a NN as a program (a huge function with many inputs), it becomes clear why AEs work.

  • @obadajabassini3552
    @obadajabassini3552 6 years ago

    There is a really cool paper about using adversarial attacks on humans. Do you think that being fooled by an adversarial example is a fundamental property of our visual system?

    • @ArxivInsights
      @ArxivInsights  6 years ago +1

      Obada Jabassini Well, our visual system evolved through evolutionary selection in the natural world. It only had to provide useful features in the physical world that surrounded us. I think it's very likely that any complex system will inevitably show signs of adversarial vulnerabilities outside of its evolved domain, including our own visual system. With the advent of digital technologies and machine learning I think it's quite likely we can discover a whole range of 'optical illusions' and other kinds of adversarial tricks our brains did not evolve to manage. Super interesting to think about the relation between subjective perception and objective 'reality' (if such a thing exists) in the context of adversarial examples.. If you're really interested I suggest you check out Donald Hoffman's TED talk, super interesting stuff!

  • @Waferdicing
    @Waferdicing 1 year ago

    😎

  • @gorgolyt
    @gorgolyt 3 years ago

    How do we know that classification adversarial examples don't exist for the human brain?

  • @ultraderek
    @ultraderek 5 years ago

    The problem is over generalization.

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 5 years ago

    Wow, doesn't the fact that adversarial examples exist indicate that our algorithms haven't extracted the essence of what makes an image such an image? And that the models, despite appearing to work most of the time, are not really working?

    • @ArxivInsights
      @ArxivInsights  5 years ago +1

      Yes I would agree. It basically points out that they fail to fully exploit all the structure in natural images and instead are such general learning architectures that locations just outside the natural data manifold (eg adversarial images) can trigger arbitrary responses from the network since there was never any guided gradient descent in those regions.

  • @joshuascholar3220
    @joshuascholar3220 3 years ago

    What marketing departments and political consultants and propagandists do is generate adversarial examples for human cognition and emotion.

  • @vijayabhaskarj3095
    @vijayabhaskarj3095 6 years ago

    If we trained a GAN, throw away the Generator, take only the discriminator, will it be more robust to adversarial attack than normal image classifiers?

    • @ArxivInsights
      @ArxivInsights  6 years ago +1

      Vijayabhaskar J Well, the discriminator itself can only perform binary classification (Real/Fake) so you can't just use it as an image classifier. But what people have done is train a discriminator to distinguish between normal / adversarial images and put that network in front of a normal classifier. So if an adversarial image is sent to the API, it simply gets rejected by the discriminator.
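
A schematic of the detector-then-classifier pipeline described in the reply above; `detector` (a binary clean/adversarial network returning one logit) and `classifier` are assumed to already be trained, and the rejection threshold is arbitrary.

```python
import torch

def guarded_predict(detector, classifier, image, threshold=0.5):
    """Reject inputs the detector flags as adversarial; classify the rest."""
    with torch.no_grad():
        p_adv = torch.sigmoid(detector(image)).item()
        if p_adv > threshold:
            return None                               # reject: likely adversarial
        return classifier(image).argmax(dim=1).item()
```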

    • @vijayabhaskarj3095
      @vijayabhaskarj3095 6 years ago

      Thank you for the reply.
      What if the discriminator is modified to tell not only whether the image given by the generator is fake, but also to classify the image? Something like an output array of length 1001, where array[0] tells whether the image is fake and array[1:] are the class probabilities? During training, the misclassification loss is only added if array[0] == 0 (0 means original image, 1 means generated image).
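
What the reply above describes is close to an auxiliary-classifier discriminator: one real/fake output plus per-class logits. A minimal sketch of such a head; `backbone`, `feat_dim`, and the class count of 1000 are assumptions.

```python
import torch.nn as nn

class AuxDiscriminator(nn.Module):
    """Discriminator with one real/fake logit plus 1000 class logits (1001 outputs)."""

    def __init__(self, backbone, feat_dim, num_classes=1000):
        super().__init__()
        self.backbone = backbone                          # shared feature extractor
        self.real_fake = nn.Linear(feat_dim, 1)           # "array[0]": fake vs. real
        self.classify = nn.Linear(feat_dim, num_classes)  # "array[1:]": class logits

    def forward(self, x):
        features = self.backbone(x)
        return self.real_fake(features), self.classify(features)
```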

  • @revimfadli4666
    @revimfadli4666 4 years ago

    Is this the machine learning equivalent of pareidolia?

  • @monstercolorfunco4391
    @monstercolorfunco4391 4 years ago

    Neural networks are not using proper logic for colors and uniformity; they are perhaps 1000 times simpler than the brain's, so they can be tricked using simple stuff which doesn't even contain color or uniformity. Things will come along though. It's amazing that they are efficient already, and it shows that TensorFlow version 17.0 will be very awesome, even if it does require 0.01 nm processors!

  • @seijurouhiko
    @seijurouhiko 5 years ago

    Keanu Reeves or Sylvester Stallone?

  • @pooorman-diy1104
    @pooorman-diy1104 4 years ago

    Bikini pics are adversarial examples that flip a human's (male category) attention focus ......

  • @aronhighgrove4100
    @aronhighgrove4100 3 years ago

    Good presentation, but you are a bit too much in the foreground and moving a lot, which is highly distracting.