Generative Adversarial Networks (GANs) - Computerphile

  • Published 14 May 2024
  • Artificial Intelligence where neural nets play against each other and improve enough to generate something new. Rob Miles explains GANs
    One of the papers Rob referenced: bit.ly/C_GANs
    More from Rob Miles: bit.ly/Rob_Miles_YouTube
    / computerphile
    / computer_phile
    This video was filmed and edited by Sean Riley.
    Computer Science at the University of Nottingham: bit.ly/nottscomputer
    Computerphile is a sister project to Brady Haran's Numberphile. More at www.bradyharan.com

COMMENTS • 698

  • @Nalianna
    @Nalianna 6 years ago +1083

    This gentleman explains high-level concepts in ways that the layman can understand, AND has an interesting voice to listen to. A++ work

    • @AlexiLaiho227
      @AlexiLaiho227 5 years ago +19

      You should check him out, he made his own YouTube channel. Search for "Robert Miles AI"

    • @savagenovelist2983
      @savagenovelist2983 4 years ago +1

      299 likes, here we go.

    • @giveusascream
      @giveusascream 3 years ago +2

      And mutton chops that I can only dream of

    • @blackcorp0001
      @blackcorp0001 3 years ago +1

      Brain work ... like House work...but deeper

    • @ev6558
      @ev6558 2 years ago +9

      I like that they don't feel the need to do a camera cut every time he pauses to think of his next word. Makes me feel like the video was made for people who are actually interested and not just clickbait for zoomers.

  • @vincentpeschar
    @vincentpeschar 6 years ago +2515

    "Neural networks don't have feelings, yet...."

    • @RafidW9
      @RafidW9 6 years ago +37

      Vincent Peschar this is why the AGI will fight back. We abuse them so much lol.

    • @TechyBen
      @TechyBen 6 years ago +19

      Does a rock have feelings? If a rock had feelings, would it matter? Why? (honest questions on logic and people's feelings)

    • @AlabasterJazz
      @AlabasterJazz 6 years ago +62

      It could be said that any matter that is arranged into any pattern is at some level alive. While a rock wouldn't have feelings nearly as obvious as humans', it still might have some sense of being. Breaking a rock into pieces may not cause it to experience pain or anxiety or pleasure, as its sensory capacity is not sufficient to notice such changes to itself. However, its current makeup and position in the universe is no more or less arbitrary than any other matter in the universe. I guess the follow-up question might be: if all matter, including organisms, is ultimately made up of non-living particles, what is life?

    • @autolykos9822
      @autolykos9822 6 years ago +23

      Yet. Growth mindset.

    • @tylerpeterson4726
      @tylerpeterson4726 6 years ago +25

      TechyBen The problem comes when you start asking if mud has feelings and if people have feelings. Mud and people are generally made of the same materials. It’s just that we are organized in a way that gives us feelings. The religious and non-religious can debate if the soul exists or not, but scientifically we can only differentiate between mud and life based on its level of organization. And so it holds that a highly organized piece of silicon (a computer chip) could also have feelings.

  • @mother3946
    @mother3946 1 year ago +7

    His clarity and simplicity in unpacking a complex topic is just out of this world.

  • @d34d10ck
    @d34d10ck 6 years ago +361

    To call this impressive would be an understatement. That's amazing, fantastic, unbelievable, highly interesting and scary all at once.

    • @daniellewilson8527
      @daniellewilson8527 3 years ago

      Patrick Bateman why would it be scary?

    • @d34d10ck
      @d34d10ck 3 years ago +7

      @@daniellewilson8527 Most technologies can be scary, since they all have the potential of being misused.
      AI can be particularly scary, since we use it for systems that are too complex for us to understand.
      So what we do is hand these complexities over to a computer, in the hope that it handles them the way we think it should. But the truth is that we don't really know what it does, and if we decide to use such technologies in our weapon systems, for example, then it starts getting scary.

    • @insanezombieman753
      @insanezombieman753 3 years ago +14

      @@d34d10ck Interesting. Now let's hear what Paul Allen has to say about this

    • @h0stI13
      @h0stI13 2 months ago +1

      What do you think about it now?

    • @d34d10ck
      @d34d10ck 2 months ago

      @@h0stI13 I can no longer imagine a life without generative AIs. As a developer, I use them all the time and my productivity has increased immensely because of them.

  • @CarterColeisInfamous
    @CarterColeisInfamous 6 years ago +339

    these are some of the coolest networks I've seen so far

  • @recklessroges
    @recklessroges 6 years ago +389

    The Dell screens have come to worship the Commodore PET.

  • @madumlao
    @madumlao 6 years ago +256

    I love how quickly he moved past neural networks having feelings.
    "But neural networks don't have feelings (yet) so that's really not an issue. You can just continually hammer on the weak points, find whatever they're having trouble with, and focus on that"
    You just know that our robot masters are just going to replay this over and over again in the trial against humanity.

    • @bionicgirl6826
      @bionicgirl6826 1 year ago +4

      haha you're so funny

    • @qwertysacks
      @qwertysacks 1 year ago +1

      Fish don't have feelings either, but I have no qualms with sardine canning companies packing millions of sardines a year. It's almost like most intelligent agents don't care about automatons, nor should they

    • @harrygenderson6847
      @harrygenderson6847 1 year ago +11

      @@qwertysacks Fish do have feelings. They have endocrine and nervous systems, and can act scared or whatever. Not that I care much about those feelings, but it's still non-zero. The narrow forms of AI we have at the moment do not have sufficient complexity for feelings.

    • @pigeon3784
      @pigeon3784 1 year ago

      @@harrygenderson6847 Nor will they for many years. It’s a non-issue.

    • @KitsuneShapeShifter
      @KitsuneShapeShifter 1 year ago

      I'm starting to think you're right...
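
    The "hammer on the weak points" idea quoted in this thread is exactly what the alternating GAN training loop does. A minimal PyTorch sketch, with illustrative names (D is assumed to end in a sigmoid):

      import torch
      import torch.nn as nn

      def gan_step(G, D, real_batch, opt_G, opt_D, z_dim=100):
          bce = nn.BCELoss()
          n = real_batch.size(0)
          ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

          # Discriminator turn: push real images toward 1, fakes toward 0.
          opt_D.zero_grad()
          fake = G(torch.randn(n, z_dim))
          loss_D = bce(D(real_batch), ones) + bce(D(fake.detach()), zeros)
          loss_D.backward()
          opt_D.step()

          # Generator turn: it is rewarded exactly where D currently fails,
          # so training keeps hammering on the weak points automatically.
          opt_G.zero_grad()
          loss_G = bce(D(fake), ones)
          loss_G.backward()
          opt_G.step()
          return loss_D.item(), loss_G.item()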

  • @JamesMBC
    @JamesMBC 6 years ago +40

    Man, one of my favorite videos on this channel. How did I miss it?
    Not only does it make you think about the endless potential of machine learning, it also sheds some light on how natural brains might work. Maybe even a basic aspect of the nature of creativity.
    Getting my mind blown again!

  • @bimperbamper8633
    @bimperbamper8633 6 years ago +14

    Only discovered this channel recently and I've been watching nothing but Computerphile videos for a whole week. Love the content you do with Rob Miles - his field of study combined with his explanations makes these my favorite videos to watch.
    Thank you!

  • @realityveil6151
    @realityveil6151 6 years ago +116

    Lost it at "Neural networks don't have feelings yet."
    It was just the casual way he threw it out there and took it as the most normal thing in the world. Like "yet" makes total sense.

    • @daniellewilson8527
      @daniellewilson8527 3 years ago +1

      RealityVeil does it not? The first multicellular organisms didn't have feelings (emotions); over time, emotions were produced, as well as brains

    • @PaulBillingtonFW
      @PaulBillingtonFW 3 years ago +1

      I'm afraid that is a common issue in AI. NNs might become aware and acquire feelings. Some people still believe that animals do not have feelings. It keeps the world nice and simple.

    • @staazvaind3869
      @staazvaind3869 3 years ago +3

      just a matter of input data. hormones and brain / body health and their part in psychology in random situations. it will connect the dots at some point. one could argue "aren't those feelings simulated?" but then ask yourself: "aren't yours?". the structure of mind is based on the structure of input. that's why you shouldn't be afraid of AI with feelings but of BIG DATA!

  • @DotcomL
    @DotcomL 6 years ago +23

    I love the "finding the weakness" analogy. Really helped me to understand.

  • @samre3006
    @samre3006 5 years ago +19

    Never really understood GANs before. Thank you so much for making this so intuitive. Eternally grateful.

  • @slovnicki
    @slovnicki 4 years ago +68

    "..which is kind of an impressive result." - understatement of the century

    • @jork8206
      @jork8206 3 years ago +3

      Gotta love latent spaces. My favorite was a network that showed a significant correlation between - and - . Assigning any direct meaning to that could be a leap of logic but when you think about it, cats have more visually feminine features than dogs, generally speaking

  • @BenGabbay
    @BenGabbay 6 years ago +2

    This is literally one of the most fascinating videos I've ever seen on YouTube.

  • @viniciusborgesdelima2519
    @viniciusborgesdelima2519 1 year ago +1

    Literally the best explanation possible for such a dense topic, congrats my man, you are incredible!

  • @szynkers
    @szynkers 6 years ago +10

    The only instance I can remember when a science video presented at my level of understanding genuinely blew my mind at the end. The research on artificial neural networks will surely change computing as we know it.

  • @tumultuousgamer
    @tumultuousgamer 2 years ago +3

    That last bit was super interesting and mind blowing at the same time! Excellent video!

  • @lesbianGreen
    @lesbianGreen 5 years ago +36

    holy moly, this dude has a gift for explaining. awesome work

  • @airportbum5402
    @airportbum5402 1 year ago +1

    I think it's so cool that there is a Linksys WRT-54G and a Commodore PET in the background and they're discussing topics so modern.

  • @tarat.techhh
    @tarat.techhh 3 years ago +23

    I wish I could talk to this guy once... He seems so cool and intelligent at the same time

    • @awambawamb4783
      @awambawamb4783 2 years ago

      Approach him with wine and a supercapacitor. And a throwaway guitar.

  • @macronencer
    @macronencer 6 years ago +7

    Love the Commodore PET on the shelf! I played with one of the original PETs when they first came out (the one with the horrid rectilinear keyboard!). We eventually got four of the later models at my school, and before long we were happily playing Space Invaders when the teachers weren't looking... and then doing hex dumps of Space Invaders, working out how it worked, and adding a mod to give it a panic button in case the teacher came into the room so you could hit the button and look as if you were working. To be honest, I'm not sure they would have cared, because we probably learned more by doing the hex dump than we would have with our usual work!

    • @raapyna8544
      @raapyna8544 1 month ago +1

      Oh the effort kids will put in in order to avoid work!

  • @meghasoni7867
    @meghasoni7867 1 year ago

    High-level concepts explained so beautifully. Fantastic!

  • @LP6_yt
    @LP6_yt 6 years ago +66

    Love the Commodore PET on the shelf. Class.

    • @greywolf271
      @greywolf271 6 years ago +3

      Stuff a GAN into 64K. Reminds me of the chess player written for 4K of RAM

    • @meanmikebojak1087
      @meanmikebojak1087 4 years ago +1

      I've got a Commodore PET on a shelf too. Mine walks off during POST, so it isn't used anymore. But it looks classy on the shelf.

  • @fast1nakus
    @fast1nakus 5 years ago +2

    I'm pretty sure this is the best format for learning something on YouTube

  • @animanaut
    @animanaut 1 year ago +3

    wild to view this video again in 2023

  • @chrstfer2452
    @chrstfer2452 6 years ago +39

    "Right now, they're just datapoints" I like this guy

  • @cazino4
    @cazino4 5 years ago +1

    This guy presents fantastically. Such an interesting topic... I remember seeing an online CS Harvard lecture around a decade ago that used the same concept (having the system compete with another instance of itself) to train a computer chess player...

  • @georginajo8441
    @georginajo8441 3 years ago +3

    Wow, how can you make something so complex be so easy to understand? Thank you man

  • @kashandata
    @kashandata 3 years ago

    The best explanation of GANs I have ever come across.

  • @milomccarty8083
    @milomccarty8083 3 years ago

    Studying computer science now. These videos give me inspiration to try to connect concepts outside of the classroom

  • @dibyaranjanmishra4272
    @dibyaranjanmishra4272 6 years ago +10

    excellent explanation!!! one of the best videos ever on computerphile

  • @tohamy1194
    @tohamy1194 6 years ago +16

    I could watch this all day.. like I did yesterday with numberphile :D

  • @marcelmersch6797
    @marcelmersch6797 6 years ago +2

    Well explained. Best video about GANs I have seen so far.

  • @Felixkeeg
    @Felixkeeg 6 years ago +351

    I honestly more often than not click the video based on whether Rob is hosting.

    • @dylanica3387
      @dylanica3387 6 years ago +6

      Same here

    • @VentraleStar
      @VentraleStar 6 years ago +12

      He's cute

    • @HailSagan1
      @HailSagan1 6 years ago +34

      I like all the computerphile regulars, but yeah Rob is great. I recommend checking out his personal channel that focuses on AGI; it's linked in the description above!

    • @cubertmiso4140
      @cubertmiso4140 6 years ago

      Cast is great for any channel. Only Philip Moriarty gives weird vibes.

    • @JamesMBC
      @JamesMBC 6 years ago +4

      This guy knows. Rob is the best, and this is fascinating!
      It makes it irresistible to get involved with machine learning.

  • @surrealdynamics4077
    @surrealdynamics4077 3 years ago +3

    This is so interesting! This is the way thispersondoesnotexist "photos" are made by the machine. Super cool!

  • @Bloomio95
    @Bloomio95 2 years ago

    That last part about the latent space was really valuable insight! Hard to come by

  •  6 years ago +1

    Love this guy. Harnessing your concepts here!

  • @TankSenior
    @TankSenior 6 years ago +12

    That was extremely interesting, thank you for making this episode.

  • @AdityaRaj-bq7dz
    @AdityaRaj-bq7dz 2 years ago

    the best video on GANs I have ever seen, this might help me return to ML

  • @truppelito
    @truppelito 6 years ago +2

    20-minute video about AI by Rob Miles? YES PLZ

  • @R.Daneel
    @R.Daneel 2 years ago +3

    I love seeing this in 2022, and comparing this to DALL-E, GPT-3, etc. Wow. Five years later, and it's generating "Pink cat on a skateboard in Times Square" at artist quality.
    (@16:25 - Yup. You do. And it does.)

  • @BatteryExhausted
    @BatteryExhausted 6 years ago +6

    With the human analogy, an interesting idea is that you don't just focus on the weak area of learning; you also adapt your teaching technique to enable learning. You change your approach. The difficulty in learning may not be a fault of the student but a 'bug' in the teaching method.
    [1 and 7 look similar; our learning strategy is based on a simplistic shape-recognition concept, so we adapt our recognition concept (we focus on a particular aspect of the image, for example)
    and thus the learner has a 'light bulb' moment as they 'get the point']

  • @nullptr.
    @nullptr. 6 years ago

    Love the video, everything is well explained and easy to understand.

  • @RobinWootton
    @RobinWootton 2 years ago

    Hard to imagine watching television again, when such interesting programs are broadcast here instead.

  • @caty863
    @caty863 7 months ago +1

    "...but neural networks don't have feelings yet."
    Robert Miles throws this out there nonchalantly. I think he knows something we don't. What is it?

  • @knightshousegames
    @knightshousegames 6 years ago +71

    "So cats equal zero and dogs equal one. You train it to know the difference" Ultimate final test: show it a Shiba Inu.

    • @GhostGuy764
      @GhostGuy764 6 years ago +11

      knightshousegames Shiba look too happy to be cats.

    • @knightshousegames
      @knightshousegames 6 years ago

      That is what they call a fringe case. My guess is the machine would try to return a 0.5

    • @homer9736
      @homer9736 6 years ago +3

      knightshousegames I think you should ban 0.5 because that's always right for both cases; the machine can't learn from that

    • @hellfiresiayan
      @hellfiresiayan 5 years ago +8

      No because in the end you can tell the network that it is a dog, and it could alter its biases based on that result, so the next 100 times you show it a shiba inu, it might be able to give a better answer. Whether that would negatively affect its ability to identify a cat, however, I have no idea.
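
    A toy PyTorch sketch of the cats = 0 / dogs = 1 setup discussed in this thread (the network and sizes are illustrative, not from the video): an ambiguous image lands near 0.5, and supplying the true label nudges future outputs.

      import torch
      import torch.nn as nn

      classifier = nn.Sequential(
          nn.Flatten(),
          nn.Linear(64 * 64 * 3, 128), nn.ReLU(),
          nn.Linear(128, 1), nn.Sigmoid(),  # output in (0,1): 0 ~ cat, 1 ~ dog
      )

      image = torch.rand(1, 3, 64, 64)      # stand-in for a Shiba Inu photo
      p_dog = classifier(image)             # ~0.5 would mean "can't tell"

      # Telling the network the right answer (it's a dog) updates the weights:
      loss = nn.BCELoss()(p_dog, torch.ones(1, 1))
      loss.backward()                       # gradients push p_dog toward 1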

  • @morkovija
    @morkovija 6 years ago +1

    Great explanation Rob, thank you!

  • @csIsKrass
    @csIsKrass 4 years ago

    This is amazing! Keep up the great work!

  • @cameroncroker8389
    @cameroncroker8389 3 years ago

    WD, love Rob's explanations!

  • @MrCmon113
    @MrCmon113 4 years ago +1

    "Kind of impressive" is a massive understatement. It's one of the most awesome and scary things I know.

  • @wesleyk.8376
    @wesleyk.8376 5 years ago

    Deeply sophisticated trial and error to produce meaningful visual results. Awesome

  • @alissondamasceno2010
    @alissondamasceno2010 6 years ago

    THIS is the best channel ever!

  • @ikennanw
    @ikennanw 3 years ago

    I wish I saw this earlier. You guys are amazing.

  • @nateshrager512
    @nateshrager512 6 years ago +1

    Fantastic explanation, love this guy

  • @tabnovasolutions1593
    @tabnovasolutions1593 4 years ago

    Wow, excellent explanation of GANs - thanks a lot

  • @mickmickymick6927
    @mickmickymick6927 5 years ago

    The videos on Rob's channel are so much better edited

  • @achimvonprittwitz9508
    @achimvonprittwitz9508 5 years ago +2

    Wow this video is amazing. Can he do some live coding/example? Would be interesting to see the pictures.

  • @tonyduncan9852
    @tonyduncan9852 1 year ago +2

    The common room elephant: consciousness is _relative,_ and shared by electronic machinery, and all of Earth's animals, including elephants, and not excluding Man.

  • @petercourt
    @petercourt 3 years ago

    Latent space description was great!

  • @MetsuryuVids
    @MetsuryuVids 6 years ago +18

    Another cool thing he didn't mention about that experiment with the faces:
    They also tried to generate a picture with only features that were found on men, and one with only features that were found on women, and the network ended up generating "grotesque" pictures that were basically caricatures of a "man" or a "woman".

    • @daniellewilson8527
      @daniellewilson8527 3 years ago

      Metsuryu Is it possible to see these images?

    • @MetsuryuVids
      @MetsuryuVids 3 years ago +1

      @@daniellewilson8527 I saw these somewhere a long time ago, but you can probably try googling something like "AI generated male/female faces"

    • @toomuchcandor3293
      @toomuchcandor3293 3 years ago +2

      @@MetsuryuVids bro that's too general of a search

    • @MetsuryuVids
      @MetsuryuVids 3 years ago

      @@toomuchcandor3293 Yeah, sorry, I don't remember much else. I tried to find it again some time ago, but with no success.
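
    The face arithmetic described above, sketched in PyTorch. The generator and the latent vectors here are stand-ins; in the experiment from the video, each vector would be the average latent input of generated samples a human labelled "man with glasses", "man without glasses", or "woman without glasses".

      import torch
      import torch.nn as nn

      G = nn.Linear(100, 64 * 64)        # stand-in for a trained face generator

      z_man_glasses = torch.randn(100)   # mean latent vector: men with glasses
      z_man         = torch.randn(100)   # mean latent vector: men, no glasses
      z_woman       = torch.randn(100)   # mean latent vector: women, no glasses

      # man with glasses - man + woman: with a real trained G, this
      # decodes to a woman wearing glasses.
      z_new = z_man_glasses - z_man + z_woman
      image = G(z_new).reshape(64, 64)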

  • @picpac2348
    @picpac2348 6 years ago +20

    Would love to see some examples of the generated and real pictures.

  • @jasurbekgopirjonov
    @jasurbekgopirjonov 5 months ago

    an amazing explanation of GANs

  • @cl8484
    @cl8484 6 years ago +3

    Very interesting topic and an excellent explanation by Rob! I hardly ever write YouTube comments, but this video is great; it deserves all the love it is getting.

  • @varunjaggi6208
    @varunjaggi6208 4 years ago

    what an amazing video, I wish I had a teacher like him!

  • @imchukwu
    @imchukwu 6 years ago +1

    Hi, thanks for the video, really great. Please, I would like to know the minimum number of samples needed to train a GAN, as well as how long an ideal training run would take with a single GPU and 2 CPU cores. Just an estimate.

  • @bipolarminddroppings
    @bipolarminddroppings 2 years ago

    The fact that he added "yet" is both exciting and chilling.

  • @AnindyaMahajan
    @AnindyaMahajan 5 years ago

    It's completely flabbergasting to me how far science has come in the last decade alone!

  • @forkontaerialis5347
    @forkontaerialis5347 6 years ago +3

    This man is the only reason I stay subscribed, he is fantastic

  • @YanivGorali
    @YanivGorali 2 years ago

    Such a great explanation!

  • @sidrasafdar7325
    @sidrasafdar7325 2 years ago

    Your video and way of explanation are so clear and understandable that even a person like me who is new to GANs can fully understand every point. But can you please make a video on how to optimize GANs with the Whale Optimization Algorithm? I have searched a lot for how to optimize GANs with WOA but couldn't find any related coding example.

  • @ScottMorgan88
    @ScottMorgan88 6 years ago +2

    Great explanation. Thank you!

  • @Rgmenkera
    @Rgmenkera 6 years ago +3

    yes, more of this guy!

  • @chrisofnottingham
    @chrisofnottingham 6 years ago

    Very interesting. I think perhaps the explanation focused on the interaction between the generator and the discriminator such that we lost sight of the system still needing actual pictures of cats.

  • @AxelWerner
    @AxelWerner 6 years ago

    Talking about developing Skynet and advanced artificial intelligence, while in the background they keep a Commodore PET as their backup system ^-^ PRICELESS!

  • @alienturtle1946
    @alienturtle1946 7 months ago

    Bro understood the cyclical nature of GANs so well that even his explanation turned cyclical

  • @Eskermo
    @Eskermo 6 years ago +3

    I'm pretty excited about GANs, but what about when either the generator or the discriminator gets a big edge over the other during training, basically killing the other network's further progress? Robert spoke about training on where the discriminator is weak, but it would be nice to have some more details.
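
    The video doesn't prescribe a fix for this, but one common heuristic (an assumption here, not something from the video) is to skip updates for whichever network is dominating. Reusing the names from the training-loop sketch earlier in these comments:

      # Only train the discriminator while it is still fallible; if its
      # loss collapses toward zero, let the generator catch up.
      if loss_D.item() > 0.3:
          opt_D.zero_grad()
          loss_D.backward()
          opt_D.step()
      opt_G.zero_grad()
      loss_G.backward()
      opt_G.step()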

  • @nicksundby
    @nicksundby 3 years ago

    See that Commodore PET on the shelf - I used to use those at college in the late '70s.

  • @Zizuzot
    @Zizuzot 3 years ago

    Amazing explanation, thanks!

  • @jonathanmarino7968
    @jonathanmarino7968 6 years ago +331

    "Neural networks don't have feelings.. yet." lol

    • @maldoran9150
      @maldoran9150 6 years ago +15

      He said it so matter-of-factly and by the by. Chilling!

    • @ArgentavisMagnificens
      @ArgentavisMagnificens 6 years ago +4

      So you watched the video too?

    • @surrealdynamics4077
      @surrealdynamics4077 3 years ago

      I also paid specific attention to that "yet". It's super cool and scary to live in a time when we can confidently say that software might have feelings in the future

  • @onionpsi264
    @onionpsi264 4 years ago +2

    Did I miss the part of the series where we learn how the generator is actually structured and produces images? The discriminator is a standard classification neural net, which I know has been covered, but how does a neural net output an image rather than a class? Is the final output layer one pixel in the image?
    Do the "directions in picture space that correspond to cat attributes" that he references around 17:30 correspond to eigenvectors of the generator matrix?

  • @abdelhadi6022
    @abdelhadi6022 6 years ago

    Very nice explanation, man!

  • @The_Night_Knight
    @The_Night_Knight 8 months ago

    What if you trained another LSTM model to convert the text input into meaningful inputs to the generator? So instead of manually adding and subtracting values in the input vector until you get some high-dimensional line or wave or something, you just automate the process?
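
    That is essentially conditional generation. A sketch of the idea (an assumption, not something covered in the video): an LSTM summarises the text, and the generator receives that summary concatenated with the usual noise vector.

      import torch
      import torch.nn as nn

      class TextEncoder(nn.Module):
          def __init__(self, vocab=10000, dim=64):
              super().__init__()
              self.embed = nn.Embedding(vocab, dim)
              self.lstm = nn.LSTM(dim, dim, batch_first=True)

          def forward(self, tokens):            # tokens: (batch, seq_len)
              _, (h, _) = self.lstm(self.embed(tokens))
              return h[-1]                      # (batch, dim) summary vector

      z = torch.randn(1, 100)                   # random noise, as usual
      t = TextEncoder()(torch.randint(0, 10000, (1, 8)))
      g_input = torch.cat([z, t], dim=1)        # generator sees noise + text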

  • @praveshgupta1993
    @praveshgupta1993 3 years ago

    Nicely explained in layman's terms... liked it

  • @tega2754
    @tega2754 6 years ago

    Nice intuitive explanation

  • @JotoCraft
    @JotoCraft 6 years ago +29

    Are the generators producing the same image for the same input?
    If so, could it mean that continuously changing the input by small steps creates a kind of animation?
    If this really is the case I would really like to see such a movie :)

    • @philipphaim3409
      @philipphaim3409 6 years ago +27

      Check out arxiv.org/pdf/1511.06434.pdf, on page 8 the authors have essentially done that!

    • @fleecemaster
      @fleecemaster 6 years ago +3

      Wow, thanks Philipp, fascinating! Page 11 also!

    • @JotoCraft
      @JotoCraft 6 years ago +4

      Thanks, yeah, I had hoped that the pictures would be better already, but I guess that will change over time :)
      Especially the faces fall into the uncanny valley, I'd say. But besides that, those examples are exactly what I meant.

    • @RobertMilesAI
      @RobertMilesAI 6 years ago +11

      Check out my follow-up video: watch?v=MUVbqQ3STFA
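
    To the question that opened this thread: yes, a trained generator is deterministic for a fixed input, so stepping through latent space in small increments yields smooth morphing frames, which is what the paper linked above demonstrates. A minimal sketch (G is a stand-in for a trained generator):

      import torch
      import torch.nn as nn

      G = nn.Linear(100, 64 * 64)               # stand-in generator
      z_start, z_end = torch.randn(100), torch.randn(100)

      frames = []
      for t in torch.linspace(0, 1, steps=30):  # 30 in-between frames
          z = (1 - t) * z_start + t * z_end     # linear interpolation
          frames.append(G(z).reshape(64, 64))   # each frame morphs toward z_end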

  • @audreyh6628
    @audreyh6628 4 years ago +1

    Absolutely fantastic mind/teacher. I am a complete and utter noob to any of these ideas and even I could follow along. Thank you so much

  • @stuartg40
    @stuartg40 4 years ago

    This guy is on the ball: a rare trait indeed.

  • @5up5up
    @5up5up 6 years ago

    the vector arithmetic thing is awesome

  • @seditt5146
    @seditt5146 6 years ago

    I love that he said "yet"... "Neural networks don't have feelings yet", so nonchalant

  • @Im-Hacker
    @Im-Hacker 1 year ago +1

    I'm working on GANs for data augmentation and would be happy to connect with anyone interested

  • @SlobodanDan
    @SlobodanDan 6 years ago +1

    Wow. That was a pretty amazing insight. Hope for non-harmful super-intelligence? If we can do broad definitions of concepts like man's face, woman's face and glasses, then perhaps even trickier concepts can be tackled in time.

  • @vjp2866
    @vjp2866 3 years ago

    Awesome! Excellent explaining!!!

  • @Kareem-hl8hj
    @Kareem-hl8hj 5 years ago

    super clever way to explain
    thanks

  • @michaeldecarvalho8022
    @michaeldecarvalho8022 1 year ago

    This is amazing!

  • @peabnuts123
    @peabnuts123 6 years ago +4

    The last part, where Rob talks about how meaningful features are mapped to the latent space, is a demonstration of how machine learning can strongly pick up on and perpetuate biases. e.g. If you fed a model a large dataset of people and included whether people were criminals or not as part of your dataset, and you fed it a large number of criminal photos wherein the subject was dark-skinned, the model may learn that the "criminal" vector associates with the colour of a person's skin, i.e. you are more likely to be guilty of ANY CRIME if you are black.
    If we put these kinds of models in charge of informing decisions (say, generating facial sketches for wanted criminals) we might encode harmful biases into systems we rely on in our day-to-day lives. Thus, these kinds of machine learning need to be handled very carefully in real-world situations!

    • @andrewphillip8432
      @andrewphillip8432 2 years ago +1

      I think this type of machine learning algorithm might actually be somewhat resistant to what you describe, because in order for the discriminator to be consistently fooled, the generator needs to be creating samples that span the whole population of criminal photos. Criminals might have a statistically most likely race, but if the generator is only outputting pictures of that race, then the discriminator would be able to do better than 50% at spotting "fakes" by assuming that all pictures of that race were generated and not real. So the discriminator would actually undo the generator's bias for some time by being reverse-biased. So I think once the generator was fully trained it would be outputting images of criminals of all races, weighted by how many images in the training set were of each race.
      But now that I think about it, if we are using current arrest records as the training material for the GAN, then any current biases that exist with who police choose to arrest will show up in the GAN also, so developing a completely unbiased neural network for what you describe could indeed be challenging.

  • @w000w00t
    @w000w00t 1 year ago

    2022 was the year of latent diffusion!! Disco Diffusion, Midjourney, and now Stable Diffusion is about to make their weights public!! This stuff is so fascinating! :)
    Great talk, by the way!!!

    • @user-xh9pu2wj6b
      @user-xh9pu2wj6b 1 year ago +1

      And the best thing is that diffusion models aren't GANs, so they won't suffer from mode collapse and other pain like that.

  • @AB-Prince
    @AB-Prince 5 years ago +1

    Does that Commodore PET still work?
    I'd like to see a video on the PET's internals

  • @aliasad8342
    @aliasad8342 3 years ago

    Such a nice explanation :)

  • @Athenas_Realm_System
    @Athenas_Realm_System 6 years ago +34

    There are quite a few youtubers that have a lot of content of them playing around with GANs

    • @CaptTerrific
      @CaptTerrific 6 years ago +5

      Any recommendations for particularly interesting ones?

    • @Athenas_Realm_System
      @Athenas_Realm_System 6 years ago +20

      +Higgins2001 carykh being one I can think of that plays around with using a GAN to generate instrumental music by feeding it image representations.

    • @CaptTerrific
      @CaptTerrific 6 years ago +1

      Thanks!!

    • @hanss3147
      @hanss3147 6 years ago +2

      The GAN wasn't exactly very good though.

    • @keithbaton5493
      @keithbaton5493 6 years ago +2

      If I recall, the most recent AI created for the DOTA 2 game uses GANs to decimate professional gamers. OpenAI

  • @rpcruz
    @rpcruz 5 years ago

    Very cool. The only quibble I have with the video is that Rob says things like "this doesn't apply only to networks" and "they can be other processes". Actually, the GAN procedure requires a gradient descent framework because it uses the discriminator's gradients to fix the generator. Maybe you can use other stuff, but it's not as open as he makes it sound, and I don't know of anything other than neural networks being used. (EDIT: Actually, he explains all this at around 12:10.)
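
    What "uses the discriminator's gradients to fix the generator" looks like in code (PyTorch, with stand-in networks): the backward pass runs through D, but only G's parameters get updated.

      import torch
      import torch.nn as nn

      G = nn.Linear(100, 784)                   # stand-ins for trained networks
      D = nn.Sequential(nn.Linear(784, 1), nn.Sigmoid())
      opt_G = torch.optim.Adam(G.parameters())

      fake = G(torch.randn(16, 100))
      loss_G = nn.BCELoss()(D(fake), torch.ones(16, 1))
      loss_G.backward()                         # gradients flow through D into G
      opt_G.step()                              # ...but only G gets updated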