Computing Neural Network Output (C1W3L03)

  • Published 31 Jan 2025

COMMENTS • 27

  • @prismaticspace4566 • 4 years ago +5

    This is the most detailed explanation I’ve ever seen, reduced to the most intuitive parts.

  • @joostthissen8667 • 5 years ago +1

    Excellent explanation. Very clear.

  • @williamzheng5918 • 5 years ago +2

    Each seemingly identical neuron in a layer takes the same input differently; this is quite fascinating from a layman's view.

    • @sethpai • 3 years ago

      Interesting insight. I think it is analogous to how two people (or two brains) can receive the same input (like seeing the same image) and process/respond differently, due to differences in the connections and connection strengths of neurons between the two brains.
      Also, as discussed later in the course, the training of these networks is based on fine-tuning those connections between neurons to achieve a goal. In neuroscience, one theory is that our brains also tune our neuron connections to achieve changes in behavior, form memories, etc. (known as plasticity), yet each neuron by itself is, at its core, the same.

  • @armitosmt5753 • 4 years ago +2

    What are the other elements in your matrix of wT's? You only wrote the row vectors w1[1]T, w2[1]T, ...; what comes before and after each w? Do those represent the w's for the other nodes?
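
    To make the stacking concrete, here is a minimal NumPy sketch (shapes assumed from the video: 4 hidden units, 3 input features) showing that the entries before and after each wi[1]T are simply the rows belonging to the other nodes:

        import numpy as np

        # Assumed from the video: layer 1 has 4 units, each with 3 weights.
        w1, w2, w3, w4 = (np.random.randn(3) for _ in range(4))

        # Stacking the transposed weight vectors as rows gives the (4, 3)
        # matrix W[1]; row i is wi[1]T, so the "other elements" are the
        # weight vectors of the other nodes.
        W1 = np.vstack([w1, w2, w3, w4])
        print(W1.shape)  # (4, 3)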

  • @coder_extreme6389 • 4 months ago

    All is good, but shouldn't the multiplication between the weight matrix (which contains the transposed weight vectors of every unit in layer 1) and the input use a row vector of input features instead of a column vector?

  • @divyachopra2369 • 1 year ago

    I have difficulty understanding what exactly these W and b vectors are, and how we determine them for each neuron in the hidden layer.

    • @jambajuice07 • 1 year ago +1

      w is the weight and b is the bias. We use gradient descent to get these values.
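
      As a rough illustration of that reply (shapes assumed from the video; dW and db are placeholders for the gradients that backpropagation would compute), one gradient-descent step might look like:

          import numpy as np

          learning_rate = 0.01
          W = np.random.randn(4, 3) * 0.01  # weights, initialized small
          b = np.zeros((4, 1))              # biases, initialized to zero

          # In real training, dW and db come from backpropagation.
          dW, db = np.zeros_like(W), np.zeros_like(b)

          # One update step: move each parameter against its gradient.
          W -= learning_rate * dW
          b -= learning_rate * db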

  • @sandipansarkar9211 • 4 years ago

    nice explanation

  • @partyplay2010 • 5 years ago

    I thought W was a vector that is updated constantly based on the training and learning. Why do we have w1, w2, etc. operating at the same time on the same m training examples?

    • @legacies9041 • 5 years ago

      In neural networks we have hidden layers. w is a vector only in logistic regression, because there is only one neuron/unit as the output. But here we have multiple units per layer. You can still treat each w as a separate vector if you choose, but then you have to use a for-loop, which is more computationally expensive.
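
      A minimal NumPy sketch of that trade-off (assuming 4 hidden units, 3 features, and a sigmoid activation, as in the video):

          import numpy as np

          def sigmoid(z):
              return 1 / (1 + np.exp(-z))

          x = np.random.randn(3, 1)   # one example, 3 features
          W1 = np.random.randn(4, 3)  # one row of weights per hidden unit
          b1 = np.random.randn(4, 1)

          # For-loop version: treat each unit's weights as a separate vector.
          a_loop = np.zeros((4, 1))
          for i in range(4):
              a_loop[i] = sigmoid(W1[i] @ x + b1[i])

          # Vectorized version: one matrix product covers all 4 units.
          a_vec = sigmoid(W1 @ x + b1)
          print(np.allclose(a_loop, a_vec))  # True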

  • @chs2613 • 4 years ago

    If W[1] is a (4, 3) matrix, then wouldn't W[1]i be a (1, 3) row vector, which is already compatible with the (3, 1) column vector x (i.e., a[0])?
    Why do we still need to transpose W[1]i into a column vector?

    • @srujohn652 • 4 years ago

      If the W[1] weights are stored as (3, 4) and the inputs as (3, 1), then you need to transpose W, i.e., wT*x; otherwise the multiplication is not valid.
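
      Both storage conventions work as long as the inner dimensions line up; a quick NumPy check, with the shapes taken from this thread:

          import numpy as np

          x = np.random.randn(3, 1)    # input: 3 features, column vector

          # Video's convention: W1 stored as (4, 3), rows already transposed.
          W1 = np.random.randn(4, 3)
          z1 = W1 @ x                  # (4, 3) @ (3, 1) -> (4, 1)

          # This reply's convention: W1 stored as (3, 4), transposed at use.
          W1_alt = np.random.randn(3, 4)
          z2 = W1_alt.T @ x            # (4, 3) @ (3, 1) -> (4, 1)

          print(z1.shape, z2.shape)    # (4, 1) (4, 1)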

  • @acousticIndie • 4 years ago

    Do W and b here represent the weight and bias in the equation wT*x + b?

  • @iOSGamingDynasties • 4 years ago

    What is W[2] and how is it calculated?

    • @anren7445 • 3 years ago

      It's the second weight matrix, between layer[1] and layer[2]; it's learned during training the same way W[1] is.
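
      Putting the two layers together, a minimal forward-pass sketch (shapes assumed from the video: 3 inputs, 4 hidden units, 1 output):

          import numpy as np

          def sigmoid(z):
              return 1 / (1 + np.exp(-z))

          x = np.random.randn(3, 1)                               # a[0]
          W1, b1 = np.random.randn(4, 3), np.random.randn(4, 1)   # layer 1
          W2, b2 = np.random.randn(1, 4), np.random.randn(1, 1)   # layer 2

          a1 = sigmoid(W1 @ x + b1)    # (4, 1) hidden activations
          a2 = sigmoid(W2 @ a1 + b2)   # (1, 1) output; W2 is learned by
                                       # gradient descent, just like W1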

  • @shrutigoyal6472 • 5 years ago +1

    Why is W[1] a 4×3 matrix instead of a 4×1 matrix?

    • @hedgehogist • 5 years ago +4

      Because there are 3 features (x1, x2, and x3), W[1] assigns a weight to each feature of the input; the 4 rows correspond to the 4 units in the hidden layer.

  • @chitralalawat8106 • 5 years ago +4

    Very difficult to understand.. :(

    • @canucoar • 5 years ago +2

      I thought it was just me :0)

    • @drexiya221 • 5 years ago +7

      It's not easy stuff, but this is probably the best explanation, by the best teacher, that you are ever likely to come across :) He certainly helped my understanding.

    • @frankwxu • 5 years ago

      He explained it well.

    • @RealMcDudu • 5 years ago +4

      If you finish the original ML course and then move to this specialization, it will become much, much easier :-) (sometimes the tortoise can beat the hare).

    • @legacies9041 • 5 years ago +4

      Please take his first machine learning course and then come back here.

  • @rp88imxoimxo27 • 4 years ago +2

    Too easy to understand for my genius mind; now I need something much harder, like Transformers or CNNs.