Tutorial 116 - The difference between UpSampling2D and Conv2DTranspose layers in deep learning

  • Published Feb 2, 2025

COMMENTS • 8

  • @jurischaber6935 • 3 years ago +2

    Thanks, Dr. Bhattiprolu,
    you did an amazing job.
    Jurij Schaber

  • @saraabasi66 • 3 years ago +1

    Thanks for your nice videos

  • @SimonFlyvbjergNrrelykke101 • 3 years ago

    Nice and clear. Thank you.
    Question: @15:30 I am a little confused about the value of the weight: shouldn't it be the same multiplicative factor, i.e. 0.965 in this case, for all entries in the transposed matrix, rather than "0.8 something" for the (0,2) location?
    The math works out that way, with the non-zero entries being 1 x 0.965 = 0.965, 2 x 0.965 = 1.93, 3 x 0.965 = 2.895, etc. (bias = 0), and it agrees with the earlier statement that there are only two parameters to learn (though it is unclear why the bias would be zero). (See the sketch after the comments below.)
    Cheers.

  • @edmald1978 • 3 years ago +1

    Great video!!!! Thanks a lot.

  • @jesusantoniovillegas7086 • 3 years ago +1

    Thanks for teaching us. I would like to know if I could email you, Dr. Sreenivas Bhattiprolu. Have a great day.

  • @tilkesh • 2 years ago

    Thank you very much.

  • @lalitsingh5150 • 3 years ago +1

    Sir, is this your second channel?
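
A minimal sketch of the point raised in @SimonFlyvbjergNrrelykke101's question above, assuming TensorFlow/Keras as used in this tutorial series: UpSampling2D only repeats pixels and has nothing to learn, while a Conv2DTranspose with a (1, 1) kernel and (2, 2) stride has exactly two trainable parameters (one weight and one bias). The kernel size, stride, weight value of 0.965, and the small example input are illustrative assumptions, not values taken from the video.

    import numpy as np
    import tensorflow as tf

    # A 2x2 input image: [[1, 2], [3, 4]]
    x = np.arange(1, 5, dtype="float32").reshape(1, 2, 2, 1)

    # UpSampling2D: repeats each pixel into a 2x2 block; no trainable parameters.
    up = tf.keras.layers.UpSampling2D(size=(2, 2))
    print(up(x)[0, :, :, 0].numpy())
    print("UpSampling2D trainable params:", up.count_params())   # 0

    # Conv2DTranspose with a (1, 1) kernel: 1 weight + 1 bias = 2 trainable parameters.
    ct = tf.keras.layers.Conv2DTranspose(filters=1, kernel_size=(1, 1),
                                         strides=(2, 2), padding="valid")
    ct.build(x.shape)
    # Force the weight to 0.965 and the bias to 0 to mirror the numbers in the comment.
    ct.set_weights([np.full((1, 1, 1, 1), 0.965, dtype="float32"),
                    np.zeros((1,), dtype="float32")])
    print(ct(x)[0, :, :, 0].numpy())   # non-zero entries: 0.965, 1.93, 2.895, 3.86
    print("Conv2DTranspose trainable params:", ct.count_params())  # 2

With these assumed settings, every non-zero entry of the transposed-convolution output is the corresponding input value times the single weight (1 x 0.965, 2 x 0.965, 3 x 0.965, ...), which is the behaviour described in the comment.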