10.12: Neural Networks: Feedforward Algorithm Part 1 - The Nature of Code

  • Published 8 Jan 2025

COMMENTS • 185

  • @everything_strength · 4 years ago +61

    Thank you. There is NO substitute for someone being able to WRITE on a board, and explain EVERYTHING that they do.

  • @simonhadfield8540 · 4 years ago +6

    Love this guy. His energy. His explanation. Brilliant teacher

  • @shaileshrana7165 · 4 years ago +3

    I have never learned programming formally and I started by doing your coding challenges myself. Now I've started on this journey of neural networks because of you. Thank you so much.

  • @wukerplank · 4 years ago +3

    I absolutely love this! Step one to NNs is always intuitive and easy, but then it gets complicated. Thanks for taking the time to explain so well and in so much depth!

  • @amalthea7803 · 4 years ago +2

    My friends and I would like to thank you for all your videos. You helped us so much with our Master of Science in this domain. You really helped us to understand the mathematical process and logic behind it. Thanks a lot!
    You're amazing.
    random engineering students

  • @zahrakader2964 · 5 years ago +1

    I'm so glad I found these videos. I have been struggling with neural networks for so long, and it's difficult to keep up with other videos. You, sir, are brilliant.

  • @mrtandon5278 · 6 years ago +1

    It is a fact that hard algorithms become easy once they are taught by you. I really appreciate your efforts to help us understand a beautiful concept in no time.

  • @lucademma7700 · 3 years ago

    You explained the sigmoid function so quickly and yet so clearly. Thank you so much.

  • @Youtuber__ · 6 years ago +1

    So many videos cover the code behind the neural network, but it's so important to understand the fundamentals first before you can code one. There are not many videos that do a good job of explaining the simple math behind neural networks; I have looked. I'm glad I found this one.

  • @cameronnichols9905 · 4 years ago +1

    It might be worth noting that a perceptron can act as a NAND gate, and the set {NAND} is functionally complete. That means that you can build up any logical computation out of solely NAND gates, therefore, a perceptron can be used to solve any logical expression.
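
    To make the NAND point concrete, here is a minimal sketch (mine, not from the video) of a perceptron wired as a NAND gate. The weights w1 = w2 = -2 and bias b = 3 are just one choice that works with a step activation:

        # A perceptron acting as a NAND gate (step activation).
        # Weights and bias are an arbitrary working choice, not from the video.
        def perceptron_nand(x1, x2):
            w1, w2, b = -2, -2, 3
            s = w1 * x1 + w2 * x2 + b   # weighted sum plus bias
            return 1 if s > 0 else 0    # step activation

        # Truth table: only (1, 1) drives the sum negative, so this is NAND.
        for x1 in (0, 1):
            for x2 in (0, 1):
                print((x1, x2), "->", perceptron_nand(x1, x2))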

  • @charbelsarkis3567 · 7 years ago +61

    Two suggestions: first, make a neural network that learns how to play Flappy Bird, since you already have Flappy Bird. Second, make a neural network that teaches a vehicle to drive, for the same reason.

    • @TheCodingTrain · 7 years ago +14

      Great ideas!

    • @samnash6854 · 6 years ago +18

      so this is where it comes from..

    • @PSNDMII · 6 years ago +7

      Tesla: Delete dis now.

    • @MichaelHallbsmbahamas · 5 years ago

      Yes, I'd love to see you code a neural-net-controlled car.

    • @michil.1192 · 4 years ago

      @@PSNDMII Tesla of Flatlands: Delete this now

  • @frogmyre485 · 7 years ago

    I have no idea how long this specific outro has been a thing - but I love it.

  • @akzhere · 5 years ago

    Your teaching style really works well for me. I have been watching all your videos in this series. This particular video seemed a little repetitive at times and stretched to some extent, but I understand your intent was to let all the viewers get a thorough understanding of the flow of the algorithm. After watching this, no one can claim they have not understood the algorithm! Keep it up!

  • @zakarialbouhmadi3060 · 4 years ago

    Great job!!!
    You're explaining such complicated stuff from its basics in such a simple, modest, and relaxed way.
    Keep it up, man!!!

  • @santiagocalvo · 3 years ago

    I can't tell you how much I enjoy these videos and how much I'm learning. Thanks a lot!!!

  • @aadityanr8556 · 2 years ago

    Thank you for giving us a wonderful intuitive introduction to the world of Neural Networks!

  • @GreenDayMinecraft · 6 years ago +2

    Thanks for doing this video. I'm currently reading "Neural Networks and Deep Learning" by Michael Nielsen, and you helped me clarify some math stuff I was struggling with! Greetings from Argentina!

  • @Unplugged-Plug · 6 years ago +1

    Great video for a comprehensive revision before the exam. Thank you!

  • @magdelinesanjanaira-nathan4978 · 6 years ago +7

    you are freaking awesome!!!!! I just love how you teach!

  • @nikhitagoel532 · 4 years ago

    Amazing explanation. I am new to the field of data science and these videos about NN have been really helpful. Thank you sooo much

  • @gunarakulangunaretnam3353 · 5 years ago

    Hi, I am from Sri Lanka. You are a good explainer. I am your big fan from today.

  • @PremiumInfantry · 3 years ago

    You saved me so much time; hats off.
    Amazing video.

  • @adalloul3108 · 4 years ago +1

    Incredibly helpful and great teaching style! Thank you

  • @bobbybob7368 · 6 years ago

    I just wanted to say that I love your videos, thanks for taking the time to put all of that together!

  • @scottaspeaks9531 · 4 years ago

    This was a great video. I just learned about the feedforward and the 'why' was missing. You really cleared things up.

  • @wolfisraging · 7 years ago +38

    I hope you do Python, but I still love to watch your tutorials.
    Big fan 😊

  • @WristWatcher · 7 years ago

    You have the ability to teach the masses basic to intermediate coding to get us started. Is there, or could you make, a series that goes strictly from beginner to employable? You would be the GOAT.

  • @SpatulaAndEasel · 5 years ago

    Kindly correct me if I am wrong.
    At 16:14, it is mentioned that:
    H[i] = W[i][j] . I[i]
    Shouldn't it be as follows:
    H[i] = W[i][j] . I[j]
    i.e., the second term on the RHS should be I subscript j instead of I subscript i?
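
    For what it's worth, that correction matches standard matrix-vector indexing: the input is summed over j, the column index of W. A small numpy sketch (mine, with made-up numbers) for 3 inputs and 2 hidden nodes, so W has shape 2x3:

        import numpy as np

        def sigmoid(x):
            return 1.0 / (1.0 + np.exp(-x))

        W = np.array([[0.1, 0.2, 0.3],   # row i = 1: weights into hidden node 1
                      [0.4, 0.5, 0.6]])  # row i = 2: weights into hidden node 2
        I = np.array([1.0, 0.5, -1.0])   # inputs, indexed by j
        B = np.array([0.1, -0.1])        # one bias per hidden node, indexed by i

        # H_i = sigmoid( sum over j of W_ij * I_j + B_i )
        H = sigmoid(W @ I + B)
        print(H)                         # two hidden activations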

  • @IbakonFerba · 7 years ago +10

    The link for the next part leads to the previous video ^^'

  • @lohithArcot · 4 years ago

    This needs to be more popular than it is. Now I know how effective clickbait is! Phew.

  • @houdalmayahi3538 · 5 years ago

    The funniest and the best explanation ever. Thank you!

  • @elegantreaction5453 · 4 years ago

    Amazing video. Helps a lot with my data structures course project, which has a deep learning theme.

  • @knp4356 · 5 years ago

    The dot product was the correct notation... (x, y) . (x1, y1) = (x*x1) + (y*y1), and you get a scalar which you pass through some activation function (ReLU/sigmoid). Great video. Thanks.

  • @w3sp · 2 years ago

    Why did the way that 'weights' are named change from Video 10.5 to this video 10.12?
    In the 10.5 video the numbers read from left to right, so that e.g. w12 means the weight from input1 to hidden2.
    However in this 10.12 video it's the opposite, read from right to left. So now w12 means the weight that's going from input2 to hidden1.
    Is this an error? Why did it change? Can anybody explain?
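
    As far as I can tell, both conventions appear in the wild and are just transposes of each other: this video stores W with shape (hidden x inputs) and computes W @ x, while other sources (e.g. TensorFlow's Dense layer) store (inputs x hidden) and compute x @ W. A quick sketch (mine) showing the two bookkeeping styles give the same numbers:

        import numpy as np

        x = np.array([1.0, 0.5])          # two inputs

        # Convention in this video: W[i][j] = weight from input j to hidden i.
        W_video = np.array([[0.1, 0.2],
                            [0.3, 0.4],
                            [0.5, 0.6]])  # shape (3 hidden, 2 inputs)
        h_video = W_video @ x

        # The other convention (rows = inputs, columns = hidden nodes).
        W_other = W_video.T               # shape (2 inputs, 3 hidden)
        h_other = x @ W_other

        print(np.allclose(h_video, h_other))  # True: same layer, transposed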

  • @somecho · 4 years ago +4

    27:29
    "Acting!"
    What was this?????

  • @Irwansight · 4 years ago

    Your explanation is amazing. Thank you Sir.

  • @uchennanwosu5327 · 2 years ago

    Thanks Dan, you might have saved me from dropping an AI deep learning program after the Day 1 videos.

  • @farrukhsaeed4615 · 5 years ago

    @ 15:20 ... it should be j rows for the I (input)

  • @ymeng7442 · 5 years ago

    There were typos in your formulas:
    1) Around 16:45, the subscript of X is wrong; the formula should be H_i = sigmoid(W_ij * X_j + B_i).
    2) Around 26:45, we only have one bias between the hidden layer and the output layer, so in the rightmost matrix both entries should be b1.

  • @Harshit-cv4ie · 6 years ago +1

    You really enjoy teaching!!!

  • @unnikked · 7 years ago

    Looking forward to the next video. I like how you crack on with new topics. I don't mind different notations as long as they're settled and make sense :)

  • @prisonbreak931 · 6 years ago +1

    Some may find your videos, notations or explanations difficult to follow. I find they really click with me! Must be my (or our??) brains. Cheers!

  • @NitinSharma-tq1qb · 4 years ago

    18:15: I think it should be W[i][j] . I[j] + B[i], since i = {1, 2} and j = {1, 2, 3}.

  • @th4tgi371 · 7 years ago

    Yeah, neural networks!!! I'm trying to build a NN but I don't know how to. I've been waiting for these for about 3 months. Thanks Dan!

  • @007aha1 · 7 years ago +2

    Perfect, I will now switch into standby mode and wait for the next live stream :o

  • @janmichaelbesinga3867 · 5 years ago

    Complex things ain't boring at all, only at Coding Train! Choo choo!

  • @nafassaadat8326 · 3 years ago

    You make me laugh while I learn, which keeps it interesting. Tnxxxxxx

  • @asharkhan6714 · 6 years ago

    The input layer is a layer, but it's not counted as a layer when calculating the depth of a neural network. The depth of a neural network is calculated as the number of hidden layers + 1 (for the output layer). So the above one is a two-layer perceptron. Correct me if I'm wrong :)
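
    If that counting convention is right (it matches common usage), the network in this video, with one hidden layer plus the output layer, has depth 1 + 1 = 2, which is exactly why it can be called a two-layer network even though three layers are drawn.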

  • @hydra4370 · 7 years ago +9

    Just took a programming midterm; I wonder what my teacher would think about neural networks after just starting off Java with us.

    • @b2bb · 7 years ago +3

      You have much to learn, young padawan.

    • @hydra4370 · 7 years ago

      Definitely!

    • @fr3fou · 4 years ago

      2 years later, what's up?

  • @gabrielaugusto6001 · 7 years ago

    Dude, your videos are amazing

  • @andruw5075 · 5 years ago

    You're an actual genius

  • @aayushmanchatterjee4688 · 3 years ago

    How do we get the weight matrix in the first place? We know how to change the weights after backpropagation, but what about the weights during the feedforward process? We know the matrix is random, but my question is: are there any measures of that randomness of the weights?
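
    On the "measures of randomness" question: a common baseline is small uniform values (if I recall correctly, the toy library in this series initializes uniformly in [-1, 1]), and a more principled choice scales the spread by the number of inputs, e.g. Xavier/Glorot initialization. A sketch (mine, not from the video):

        import numpy as np

        n_inputs, n_hidden = 3, 2

        # Naive baseline: uniform random in [-1, 1], no scaling.
        W_uniform = np.random.uniform(-1.0, 1.0, size=(n_hidden, n_inputs))

        # Xavier/Glorot: spread shrinks as the number of inputs grows, which
        # keeps the initial weighted sums in the sigmoid's sensitive region.
        W_xavier = np.random.randn(n_hidden, n_inputs) * np.sqrt(1.0 / n_inputs)

        print(W_uniform)
        print(W_xavier)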

  • @OneShot_cest_mieux · 7 years ago

    Thank you so much for your help. I am French and I don't understand everything in 3blue1brown's videos.
    I can't wait for backpropagation!

  • @sonu_1911 · 3 years ago

    Please try backpropagation on a two-hidden-layer network and find the value of z for the second layer's weights.

  • @maxwellsimiyu2844 · 2 years ago

    You need to come to Africa, my treat; you have taught me a lot.

  • @nuraisyatulsafarinamohamad2737 · 3 years ago

    Hello, do you know anything about the robust regression model called MM-estimation? I need to make a hybrid model which is a combination of MM-estimation with an Artificial Neural Network.

  • @edeleuse · 4 years ago

    I did not understand the use of x3 to solve XOR, but I'm not a really good English listener. Thanks for the clarification of what a hidden layer is. I would like to know how to use more than two layers and why to use them.

  • @annperera6352 · 3 years ago

    Dan, please do some videos on implementing hybrid machine learning models: how to implement collaborative filtering, rule-based classification, and association rules.

  • @amjadaljadiri2289 · 2 years ago

    Thank you sir, you are amazing (warm regards from Iraq to you).

  • @chemist000mada · 5 years ago

    Excellent series, keep up the good work.

  • @wawihabouba3620 · 7 years ago

    Thank you Dan, you're the best.

  • @htaed23 · 7 years ago +2

    Hey, where's my comment thread on whether it's a two-layer or three-layer?

  • @TimoWelde · 6 years ago

    Is x3 in this XOR example not already the bias for Input->Hidden?

  • @jithendra16 · 6 years ago

    Can you please explain how to group datasets by tissue subtype using a feedforward neural network? I am doing a project on drug prediction.

  • @jineshchoudhary8108 · 4 years ago

    I should first thank you for making us understand why a single neuron will not work, but to be very honest I did not get how two neurons will work. These two neurons will have outputs from the activation function, but I am not able to imagine how this actually helps narrow down our problem. Is there a visual way of understanding what is happening after the hidden layers pass input to the next layers?

  • @abcdxx1059 · 6 years ago

    This is confusing me: in TensorFlow the weight matrix is rows = inputs and columns = hidden nodes, while it's the opposite here. What am I getting wrong?

  • @omicron296 · 1 year ago

    I loved your explanation! Thank you very much! 🤗

  • @ESnipezHD · 7 years ago

    Very clear, great video!

    • @TheCodingTrain · 7 years ago

      I am glad to hear this b/c I felt so unsure about this video!

  • @chandangowda5911 · 5 years ago

    I guess we needed more about the values for the bias variables

  • @pixelballgaming2243 · 6 years ago +1

    How would that algorithm work if I had 2-dimensional layers?

  • @offgridvince · 1 year ago +1

    Perfect!

  • @84xyzabc · 4 years ago

    A good addition would be multiclass classification in this lecture.

  • @ManosChalvatzopoulos · 7 years ago +1

    Why didn't you make videos about NNs when I was studying them back in 2013? Such a delightful way to understand something so complicated!

    • @JoseMorales-vv5fx · 7 years ago

      what's your major? Right now I am doing my basics at a community college. Then I am going to transfer to a 4 yr university majoring in software engineering. I have to learn Java and C++ lol. I know it is going to be hard but it is possible.

  • @theRECONN · 7 years ago +1

    Wouldn't it be easier if we multiplied V * W + B where V = [v1, v2, v3] instead of [ [v1], [v2], [v3] ] (so like transposed). It makes more sense to me that Values go through Weights instead of the other way :) Great series tho!

  • @MatthewBishop64 · 6 years ago

    If an ANN is a universal function approximator, then I could make one with one input node, one output node, and some number of hidden nodes, and train it to approximate a function like sin, cos, tan, square, square root, etc.? I think I might just try that.
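
    That experiment is very doable. A rough sketch (mine; the network size, learning rate, and iteration count are arbitrary) of a 1-16-1 network with a tanh hidden layer fitting sin on [-pi, pi] by plain gradient descent:

        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)  # inputs
        y = np.sin(x)                                       # targets

        n_hidden, lr = 16, 0.05
        W1 = rng.normal(0, 1, (1, n_hidden))                # input -> hidden
        b1 = np.zeros(n_hidden)
        W2 = rng.normal(0, 1, (n_hidden, 1))                # hidden -> output
        b2 = np.zeros(1)

        for step in range(5000):
            h = np.tanh(x @ W1 + b1)           # hidden activations
            pred = h @ W2 + b2                 # linear output
            err = pred - y                     # gradient of squared error wrt pred

            # Backpropagate through the two layers.
            grad_W2 = h.T @ err / len(x)
            grad_b2 = err.mean(axis=0)
            dh = (err @ W2.T) * (1 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
            grad_W1 = x.T @ dh / len(x)
            grad_b1 = dh.mean(axis=0)

            W2 -= lr * grad_W2; b2 -= lr * grad_b2
            W1 -= lr * grad_W1; b1 -= lr * grad_b1

        print(np.mean((pred - y) ** 2))        # small if sin was approximated well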

  • @desalegnbelay9088 · 6 years ago

    Wow, an interesting and fun lecture. I love it!

  • @magdelinesanjanaira-nathan4978 · 6 years ago

    Hi, just wondering if you will do something on the Hopfield network? It is a great updating network! I hope so... *fingers crossed*

  • @kamalebrahimi8623 · 6 years ago

    If the output of the sigmoid (activation function) is a value between 0 and 1 (like 0.79), is it necessary to round it?
    I mean:
    sig >= 0.5 ---> 1
    sig < 0.5 ---> 0

    • @claudiog.7397 · 5 years ago

      Not sure, but to have a 0 or 1 output I think sigmoid is not the right activation function; rather, it should be a step function (Heaviside function). Maybe someone else can correct me if I'm wrong.
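
      One way to reconcile the two comments above (my reading, not from the video): during training you keep the smooth sigmoid value, because a step function has no useful derivative for backpropagation, and you produce a hard 0/1 only at the very end by thresholding the final output. A sketch:

          import numpy as np

          def sigmoid(x):
              return 1.0 / (1.0 + np.exp(-x))

          s = 1.3                          # some weighted sum plus bias
          soft = sigmoid(s)                # smooth value in (0, 1), ~0.79
          hard = 1 if soft >= 0.5 else 0   # threshold only for the final answer
          step = 1 if s >= 0 else 0        # Heaviside step: same decision,
                                           # but no gradient to learn from
          print(soft, hard, step)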

  • @maanasnegi6212 · 5 years ago

    If the bias weight is 0, then doesn't the whole purpose of a bias get removed?

  • @nandkishorenangre3541 · 5 years ago

    It went right into my head :)

  • @SabbirAhmed-qy7lk · 5 years ago

    Why do we have to multiply the input value by the weight?

  • @RAJAT100100 · 6 years ago

    Please do a series on unsupervised learning using autoencoders ....

  • @sorenstorm · 7 years ago +3

    FYI, couldn't find the link for the book.

    • @sorenstorm · 7 years ago

      OK. Sorry. Found the Amazon link. Should listen to what you say in the video. :-)

  • @rebornreaper194 · 3 years ago

    Thanks, this is really neat :)

  • @Martin-ep8dy · 5 years ago

    Great video!

  • @sydneythefitdr · 6 years ago

    Interesting video!

  • @jean-pierrefortinjipinov4243 · 6 years ago

    Fantastic! Great, thanks. I like your teaching, being a technical school teacher myself. Question: do I have to assume or understand that "weight" could also be a "gain", like in an electronic circuit (i.e., amplification of a signal)?

    • @TheCodingTrain · 6 years ago

      Yes, I think that's a great way of thinking about it!

  • @anonymoussloth6687 · 4 years ago

    Can someone explain why XOR can't be achieved by one perceptron?
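
    A sketch of the standard argument: a single perceptron computes a linear decision boundary, output = step(w1*x1 + w2*x2 + b). XOR would require
        b < 0             (input (0,0) -> 0)
        w1 + b > 0        (input (1,0) -> 1)
        w2 + b > 0        (input (0,1) -> 1)
        w1 + w2 + b < 0   (input (1,1) -> 0)
    Adding the middle two gives w1 + w2 + 2b > 0, i.e. w1 + w2 + b > -b; since b < 0, that means w1 + w2 + b > 0, contradicting the last constraint. No single line separates XOR's classes, which is exactly why the hidden layer is needed.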

  • @raedbettaieb9212 · 7 years ago

    I fucking love your videos man. Thank you for everything

  • @tallwaters9708 · 7 years ago +1

    Are you going to cover LSTM RNN also?

  • @MatthewCollinsPHD · 4 years ago

    Would have been nice to have the XOR problem actually explained.

  • @blackblather · 5 years ago

    Thanks for the clarifications.

  • @nolachronicle7386 · 6 years ago

    Trying to learn as much programming as I can before college to get ahead of everyone.

    • @CarloL525 · 6 years ago +1

      If you get this lesson you're so much ahead... of professionals :D

  • @Toopa88 · 7 years ago +25

    7:22 lol

    • @CarloL525 · 6 years ago

      It was so funny that I replayed it several times :D

    • @claudiog.7397 · 5 years ago

      me too, LOL

  • @JoseMorales-vv5fx · 7 years ago +21

    You look like Gilfoyle from Silicon Valley

    •  4 years ago +2

      Don't insult my man like that, he is a humble guy

  • @khusnaaullia2915 · 7 years ago

    Can you do... simple exponential smoothing?

    • @khusnaaullia2915 · 7 years ago

      Maanlamp, umm, I mean a forecasting method... single, double, or triple exponential.
      Is it the same as what you explained to me?
      Thanks in advance.

  • @priteshprakash950 · 4 years ago

    Great work.

  • @haydergfg6702 · 6 years ago

    How do I choose the weights, please?

  • @oozcan42 · 5 years ago

    Five minutes would have been enough. But 27:40... are you alone?

  • @paulancajima · 5 years ago +1

    thank you