NLP Demystified 10: Neural Networks From Scratch

  • Published 8 Jan 2025

COMMENTS • 14

  • @futuremojo
    @futuremojo  2 years ago +7

    Timestamps
    00:00:00 Neural Networks I
    00:00:39 Neural networks learn a function
    00:03:34 Why we need a bias
    00:04:49 Why we need a non-linearity
    00:05:55 The main building block of neural networks
    00:09:17 Combining units into neural networks
    00:11:08 Neural networks as matrix operations
    00:13:51 Neural network setups and loss functions
    00:23:45 Backpropagation: Learning to get better
    00:33:45 Neural networks search for transformations
    00:34:55 DEMO: Building neural networks from scratch
    01:09:29 Neural Networks I recap

  • @uncannyrobot
    @uncannyrobot 2 years ago +12

    This is a fantastic explainer, and man you've got a great set of pipes. I'm bookmarking this video for the next time someone asks me how neural networks work.

  • @monicameduri9692
    @monicameduri9692 4 months ago +1

    The best explanation so far. Awesome slides! Thanks a lot!

  • @lochanaemandi6405
    @lochanaemandi6405 1 year ago

    Omg, kudos for your efforts!!!!! I really wish you had more subscribers

  • @yogendrashinde473
    @yogendrashinde473 1 year ago +1

    Wow!! What a great way of explaining. Truly awesome!!

  • @joshw3485
    @joshw3485 2 years ago +3

    great video so far!

  • @computerscienceitconferenc7375
    @computerscienceitconferenc7375 2 years ago +3

    Great explanations!

  • @ts-yr8yz
    @ts-yr8yz 1 year ago

    Thank you for your hard work in producing this series of videos

  • @pipi_delina
    @pipi_delina 1 year ago +2

    You have taught me more about AI than 2 semesters of an AI course... Simplified a lot

  • @SatyaRao-fh4ny
    @SatyaRao-fh4ny 1 year ago

    Very helpful set of videos. However, it is unclear how the weights determined for one set of input values X1 and the corresponding expected output value Y1 will hold for any other set of input values X2 and their corresponding output value Y2. In your example, the weights computed for inputs x1=2, x2=3 and expected output y=0 may be different for any other inputs and expected output.
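One way to see why a single weight vector can fit many examples: the weights aren't solved from one (x, y) pair in isolation; gradient descent averages its updates over every training pair, nudging the shared weights toward values that minimize the loss across the whole dataset. A minimal sketch of that idea, using a hypothetical toy dataset (not the video's code) that includes the (2, 3) → 0 example from the comment:

```python
import numpy as np

# Toy dataset: several (x1, x2) -> y pairs. Values are made up for
# illustration; the first row is the (2, 3) -> 0 example above.
X = np.array([[2.0, 3.0], [1.0, -1.0], [-2.0, 0.5], [0.5, 2.0]])
y = np.array([0.0, 1.0, 1.0, 0.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # one weight per input, shared by ALL examples
b = 0.0                  # bias

lr = 0.5
for step in range(1000):
    p = sigmoid(X @ w + b)       # predictions for all examples at once
    grad_z = (p - y) / len(y)    # mean binary cross-entropy gradient
    w -= lr * (X.T @ grad_z)     # one shared update shaped by every pair
    b -= lr * grad_z.sum()

print(np.round(sigmoid(X @ w + b)))  # predictions after training
```

Because each update is averaged over all pairs, the final weights are a compromise that fits the whole dataset, not just one example.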

  • @rohanofelvenpower5566
    @rohanofelvenpower5566 2 years ago

    6:17 or a perceptron? I remember my uni teacher did not like the term "neural networks" because it implies that's how biological brains work, but in reality the two have little to do with each other

    • @futuremojo
      @futuremojo  2 years ago +2

      Yep, I also like getting away from biology-inspired terminology when I can which is why I prefer "unit" or "node". Regarding "perceptron": at 6:48, I explain the difference between units as we use them today and the classic perceptron from the 1950s (the latter doesn't have a differentiable activation function which is why I didn't include the term).
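To make the distinction in the reply concrete, here is a minimal sketch (not the video's code; the input and weight values are made up): the classic perceptron applies a hard threshold, whose gradient is zero almost everywhere, while a modern unit applies a smooth, differentiable activation such as the sigmoid, which is what lets backpropagation compute gradients.

```python
import numpy as np

def perceptron(x, w, b):
    # Classic 1950s perceptron: hard 0/1 threshold. Its gradient is zero
    # almost everywhere, so gradient-based learning can't flow through it.
    return 1.0 if np.dot(w, x) + b > 0 else 0.0

def unit(x, w, b):
    # Modern "unit"/"node": same weighted sum plus bias, but a smooth,
    # differentiable activation (sigmoid here) enables backpropagation.
    z = np.dot(w, x) + b
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative values only.
x = np.array([2.0, 3.0])
w = np.array([0.1, -0.2])
b = 0.05

print(perceptron(x, w, b))  # hard 0/1 decision
print(unit(x, w, b))        # smooth value in (0, 1)
```

Both compute the same weighted sum; only the activation differs, and that difference is what separates the trainable-by-backprop unit from the classic perceptron.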

  • @Engineering_101_
    @Engineering_101_ 8 months ago

    11

  • @ungminhhoai4510
    @ungminhhoai4510 1 year ago

    Can you send me the slides?