Neural Network Representations (C1W3L02)

  • Published 31 Jan 2025

COMMENTS • 7

  • @codewebsduh2667 • 4 years ago +1

    A good method to understand this is tensor (Einstein summation) notation:
    let b(1,j,...,i) denote superscript 1 (the layer) on tensor b, with all the following indices as subscripts, summing over any repeated index.
    Then a(1,i) = w(1,i,j) x(0,j), where x(0,j) is the j-th input and w(1,i,j) is the layer-1 weight connecting input j to node i,
    and a(2,0) = w(2,i) a(1,i) for the single output node.
    These two equations represent everything we need to know (see the sketch below).
    For the dimensions of the nth-layer weights, all we have to know is:
    shape of the current weight = (number of nodes in the current layer) x (number of nodes in the previous layer).
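
    A minimal NumPy sketch of those two summation equations (the sizes n_x and n_1 and the names W1, W2 are illustrative, and the bias and activation the video adds are omitted to match the linear form above):

        import numpy as np

        # Illustrative sizes: 3 inputs (layer 0), 4 nodes in layer 1, one output in layer 2
        n_x, n_1 = 3, 4

        x = np.random.randn(n_x)        # x(0,j): the j-th input
        W1 = np.random.randn(n_1, n_x)  # w(1,i,j): (nodes in layer 1) x (nodes in layer 0)
        W2 = np.random.randn(n_1)       # w(2,i): one weight per layer-1 node

        a1 = np.einsum('ij,j->i', W1, x)  # a(1,i) = w(1,i,j) x(0,j), summing over j
        a2 = np.einsum('i,i->', W2, a1)   # a(2,0) = w(2,i) a(1,i), summing over i

        print(W1.shape, a1.shape, a2.shape)  # (4, 3) (4,) ()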

  • @EranM • 6 years ago +20

    0:00 - 0:07 aliens overtake the video.

  • @isaacgreen9495 • 2 years ago

    This was a helpful visual. Thanks!

  • @sheemakhan4738 • 3 years ago

    Thanks

  • @giuseppevaleriogramazio3594 • 6 years ago +1

    Then can we say that the input layer has an associated w and b with shapes (1,3) and (1,1), fixed to [1,1,1] and 0, respectively?

    • @mufeili4350 • 6 years ago

      w is a matrix of shape (3,3) whose diagonal elements are all 1 and whose remaining elements are 0, i.e. the 3x3 identity matrix. b is a vector of shape (3,1) where all the elements are 0.
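
      As a quick sanity check (illustrative NumPy, not from the video): with that identity w and zero b, the layer's linear step leaves the input unchanged:

          import numpy as np

          W = np.eye(3)              # 3x3 identity: ones on the diagonal, zeros elsewhere
          b = np.zeros((3, 1))       # zero bias of shape (3, 1)

          x = np.random.randn(3, 1)  # one 3-feature input column
          z = W @ x + b              # the layer's linear step z = W x + b

          assert np.allclose(z, x)   # the "input layer" just passes x through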

  • @dengpan8086 • 7 years ago

    Very easy to understand.