Einsum Is All You Need: NumPy, PyTorch and TensorFlow

  • Published 14 Oct 2024

COMMENTS • 61

  • @matt.jordan
    @matt.jordan 3 years ago +43

    It's literally insane how well you explained this. I instantly subbed; you deserve so much more attention.

  • @udbhavprasad3521
    @udbhavprasad3521 3 years ago +10

    Honestly, there is no channel that even compares to this level of quality

  • @misabic1499
    @misabic1499 4 years ago +8

    Hi. Your model building from scratch tutorials are really helpful. Eagerly waiting for more tutorials to come. I really appreciate it!

    • @AladdinPersson
      @AladdinPersson  4 years ago

      I appreciate the kind words! Any video in particular that you thought was good, and do you have any specific suggestions for the future?

    • @leonardmensahboante4308
      @leonardmensahboante4308 2 years ago

      @@AladdinPersson Please do a video on Python hooks, i.e. how to use a pre-trained model as the encoder in U-Net architectures for image segmentation.

  • @qiguosun129
    @qiguosun129 3 years ago +3

    This is literally the best and simplest explanation I ever had, thanks.

  • @johngrabner
    @johngrabner 4 years ago +5

    Another perfect video. Most valuable because it provides a foundation for your other videos. Can't wait for your next einsum video.

  • @rajanalexander4949
    @rajanalexander4949 1 year ago

    Excellent tutorial on a very useful but sometimes confusing feature in NumPy. I would only add that "..." is syntactic sugar for omitting a bunch of indices.
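
    A minimal NumPy sketch of that ellipsis shorthand (the 4-D shape below is made up for illustration):

        import numpy as np

        x = np.random.rand(2, 3, 4, 5)      # e.g. (batch, channel, height, width)

        # "..." stands in for any leading axes we don't want to name explicitly.
        # Both lines below transpose only the last two axes; batch and channel pass through.
        a = np.einsum('...ij->...ji', x)
        b = np.transpose(x, (0, 1, 3, 2))

        assert a.shape == (2, 3, 5, 4)
        assert np.allclose(a, b)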

  • @mayankkamboj4025
    @mayankkamboj4025 9 months ago

    Wow, I finally get einsum! Thank you so much. And that LotR reference was good.

  • @gauravmenghani4
    @gauravmenghani4 2 years ago +1

    Lovely. I always found einsum non-intuitive. Learnt a lot! Thanks :)

  • @stacksmasherninja7266
    @stacksmasherninja7266 2 years ago +1

    It almost felt like you implemented these functions yourself in those libraries! Great video.

  • @SantoshGupta-jn1wn
    @SantoshGupta-jn1wn 2 years ago

    One of the most important videos I've ever seen.

  • @thecros1076
    @thecros1076 4 years ago +3

    Learnt something new today ❤️❤️... I always had a question: how and where did you learn everything?

    • @AladdinPersson
      @AladdinPersson  4 years ago +4

      I don't know all of this stuff. I research everything to try to make every video as good as I possibly can, so the process is usually that I learn something in depth and then decide to share it with you guys.

    • @thecros1076
      @thecros1076 4 years ago

      @@AladdinPersson ❤️❤️❤️ Loved all of your videos... hard work and talent is a deadly combination... hope to see new project videos soon ❤️

  • @CrazyProgrammer16
    @CrazyProgrammer16 1 year ago +1

    Hey, but why does "i,j->ij" also have a product? Nothing in the input is repeated here. Are there other rules?
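
    The rule is that einsum multiplies the operands entry-wise over the named indices and only sums over indices that are missing from the output; since both i and j appear after "->", nothing is summed and the result is the outer product. A minimal NumPy sketch:

        import numpy as np

        a = np.array([1, 2, 3])        # indexed by i
        b = np.array([10, 20])         # indexed by j

        # Both i and j appear after '->', so nothing is summed:
        # each output entry is just a[i] * b[j] (the outer product).
        outer = np.einsum('i,j->ij', a, b)
        assert np.array_equal(outer, np.outer(a, b))

        # Dropping j from the output would sum over it instead:
        assert np.array_equal(np.einsum('i,j->i', a, b), a * b.sum())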

  • @iskrabesamrtna
    @iskrabesamrtna 2 years ago

    I had to translate it to TensorFlow :) Very useful video for practice, thank you!

  • @kenzhebektaniyev8180
    @kenzhebektaniyev8180 1 year ago

    cool! tbh I didn't believe you could explain it but you did

  • @valeriusevanligasetiawan6967
    @valeriusevanligasetiawan6967 11 months ago

    This is great. I just want to know, however, whether I can do an FFT of a Green's function using einsum. Note: I've been trying for a week to implement the code and never got the correct result.

  • @rekeshwardhanani920
    @rekeshwardhanani920 1 year ago

    Insane brother, excellent just excellent

  • @haideralishuvo4781
    @haideralishuvo4781 4 years ago +2

    Awesome, your channel is so underrated. I was struggling to find a good channel to learn about PyTorch; thankfully I got yours :D Can you cover pix2pix, CycleGAN, and R-CNNs? Would be grateful if you do.

    • @AladdinPersson
      @AladdinPersson  4 years ago

      Appreciate you 👊 Many people have requested that so it's coming but can't promise when :)

  • @iskhwa
    @iskhwa 2 years ago

    Thanks, a perfect explanation.

  • @francesco_savi
    @francesco_savi 4 years ago

    nice explanation, very clear! thanks!

  • @minma02262
    @minma02262 3 years ago

    Thank you for sharing this!

  • @javidhesenov7611
    @javidhesenov7611 1 year ago

    Thanks for the awesome explanation

  • @deoabhijit5935
    @deoabhijit5935 3 years ago

    Are you considering doing another video on advanced einsum?

  • @leofh1917
    @leofh1917 3 years ago

    Thanx! This one is very useful!

  • @michaelmoran9020
    @michaelmoran9020 3 years ago

    Are the "free indicies" part of standard einstein notation or something made up to allow you to exclude array dimensions from the einsum entirely?

  • @parasharchatterjee3223
    @parasharchatterjee3223 2 years ago

    It's the Einstein summation convention that's used very commonly in physics, and it just removes the clunky summation sign in pages-long calculations!
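
    In einsum terms, the free indices are exactly the ones that appear in the output subscript; a repeated index that is dropped from the output is the one implicitly summed over, just as in the physics convention. A small NumPy sketch:

        import numpy as np

        A = np.random.rand(3, 4)
        B = np.random.rand(4, 5)

        # 'i' and 'j' are free indices: they appear after '->' and become output axes.
        # 'k' is repeated and absent from the output, so it is summed over (contracted).
        C = np.einsum('ik,kj->ij', A, B)
        assert np.allclose(C, A @ B)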

  • @johnj7883
    @johnj7883 4 years ago

    Thanks a lot. It saved my day.

  • @Han-ve8uh
    @Han-ve8uh 2 years ago +1

    One thing that wasn't mentioned in the video, which I realized halfway through, is that sometimes einsum is used on 1 operand and sometimes on 2. I tried "torch.einsum('ii->i', t, t)" and got "RuntimeError: einsum(): more operands were provided than specified in the equation". This tells me that the number of operands must correspond to the number of comma-separated index groups on the left-hand side of ->.
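
    A small PyTorch sketch of that correspondence between operands and comma-separated index groups:

        import torch

        t = torch.arange(9.).reshape(3, 3)

        # One operand, one index group before '->': the diagonal.
        diag = torch.einsum('ii->i', t)

        # Two operands, two comma-separated groups: matrix multiplication.
        prod = torch.einsum('ij,jk->ik', t, t)

        # torch.einsum('ii->i', t, t) raises the RuntimeError quoted above,
        # because the equation names one operand but two tensors were passed.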

  • @danyalziakhan
    @danyalziakhan 3 years ago

    So, basically einsum is the DSL that is shared between these libraries, right?
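
    Roughly, yes: the subscript string is the shared mini-language, and only the tensor type and function namespace change. A sketch, assuming all three libraries are installed:

        import numpy as np
        import torch
        import tensorflow as tf

        a, b = np.random.rand(2, 3), np.random.rand(3, 4)

        # The same 'ij,jk->ik' string drives all three implementations.
        r_np = np.einsum('ij,jk->ik', a, b)
        r_pt = torch.einsum('ij,jk->ik', torch.tensor(a), torch.tensor(b))
        r_tf = tf.einsum('ij,jk->ik', tf.constant(a), tf.constant(b))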

  • @Raghhuveer
    @Raghhuveer 2 years ago

    How does it compare in terms of performance and efficiency to standard numpy function calls?

  • @cassenav
    @cassenav 2 years ago

    Great video thanks :)

  • @epolat19
    @epolat19 3 years ago

    Does einsum mess up the auto-differentiation of TensorFlow?
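
    In general it shouldn't: tf.einsum is differentiable like any other op, so gradients flow through it. A minimal sketch:

        import tensorflow as tf

        x = tf.Variable(tf.random.normal((2, 3)))
        w = tf.Variable(tf.random.normal((3, 4)))

        with tf.GradientTape() as tape:
            y = tf.einsum('ij,jk->ik', x, w)
            loss = tf.reduce_sum(y)

        # Gradients flow back through the einsum contraction.
        grads = tape.gradient(loss, [x, w])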

  • @jamgplus334
    @jamgplus334 3 years ago

    nicely done

  • @iskrabesamrtna
    @iskrabesamrtna 2 years ago

    einsum to rule them all, indeed.

  • @ALVONIUM
    @ALVONIUM 1 year ago

    Absolutely incredible

  • @iskrabesamrtna
    @iskrabesamrtna 2 years ago

    Gives me an error for matrix-vector multiplication:
    torch.einsum("ij, kj->ik", x, v)
    einsum(): operands do not broadcast with remapped shapes [original->remapped]: [2, 5]->[2, 1, 5] [1, 3]->[1, 1, 3]
    same in tf
    Expected dimension 5 at axis 1 of the input shaped [1,3] but got dimension 3 [Op:Einsum]
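
    The likely cause is that the shared index j must have the same length in both operands; with shapes [2, 5] and [1, 3] the j axes (5 vs 3) cannot be matched. A sketch with compatible, made-up shapes:

        import torch

        x = torch.rand(2, 5)   # 'ij': i=2, j=5
        v = torch.rand(1, 5)   # 'kj': k=1, j=5  (j must also be 5 here)

        out = torch.einsum('ij,kj->ik', x, v)
        assert out.shape == (2, 1)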

  • @rockapedra1130
    @rockapedra1130 3 years ago

    Very cool!

  • @Choiuksu
    @Choiuksu 4 years ago

    What a nice video!

  • @alfahimmohammad
    @alfahimmohammad 3 years ago

    Will einsum work for model parallelism in Keras models?

    • @AladdinPersson
      @AladdinPersson  3 years ago

      I haven't tried that but I would imagine that it works

    • @alfahimmohammad
      @alfahimmohammad 3 years ago

      @@AladdinPersson I tried it. It wasn't good. I was better off with manually assigning each layer to each GPU in pytorch

  • @jeanchristophe15
    @jeanchristophe15 4 years ago

    I am not sure the "Batch matrix multiplication" example is correct, because i is used twice.
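
    Using the same index in both operands is fine as long as it also appears in the output: it is then a free (batch) index rather than a summed one. A PyTorch sketch of the batch case (shapes made up):

        import torch

        a = torch.rand(4, 2, 3)   # 'ijk': i is the batch axis
        b = torch.rand(4, 3, 5)   # 'ikl': same batch axis i

        # i appears in both inputs and in the output, so nothing is summed over it;
        # only k, which is missing from the output, gets contracted.
        c = torch.einsum('ijk,ikl->ijl', a, b)
        assert torch.allclose(c, torch.bmm(a, b))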

  • @MorisonMs
    @MorisonMs 3 years ago +1

    3:37 (Outer product) there is no need to sum, simply M[i,j] = A[i,k]*B[k,j]

    • @lewis2865
      @lewis2865 3 years ago +1

      It's matrix multiplication
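
      For what it's worth, the k in 'ik,kj->ij' is summed precisely because it is missing from the output; keeping it skips the sum. A small NumPy sketch of the contrast:

          import numpy as np

          A = np.random.rand(2, 3)
          B = np.random.rand(3, 4)

          # k is absent from the output, so it IS summed over: matrix multiplication.
          matmul = np.einsum('ik,kj->ij', A, B)        # shape (2, 4)

          # Keeping k skips the sum and leaves the element-wise products.
          products = np.einsum('ik,kj->ikj', A, B)     # shape (2, 3, 4)
          assert np.allclose(products.sum(axis=1), matmul)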

  • @gtg238s
    @gtg238s 4 years ago

    Great explanation! click

  • @AlbertMunda
    @AlbertMunda 4 years ago

    awesome

  • @ripsirwin1
    @ripsirwin1 3 years ago +1

    This is so difficult to understand I don't know if I'll ever get it.

    • @AladdinPersson
      @AladdinPersson  3 years ago

      Sorry, maybe I didn't explain it well enough :/

    • @ripsirwin1
      @ripsirwin1 3 years ago +1

      @@AladdinPersson no you're great. I just have to work at it

  • @SAINIVEDH
    @SAINIVEDH 3 years ago

    Can someone explain how the matrix diagonal is "ii->i"?

    • @ericmink
      @ericmink 3 years ago

      I think it's because if you wrote it as a loop, you would loop over the rows with a variable `i` and reuse the same variable for the columns (every entry at coordinates (i, i) is on the diagonal). As for the result: if you left the `i` out, it would sum the diagonal elements up; keeping it in creates a list of them instead.
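
      A tiny NumPy sketch of that explanation:

          import numpy as np

          m = np.arange(9).reshape(3, 3)

          # Repeating the index walks both axes together: entries m[i, i].
          diag = np.einsum('ii->i', m)    # same as np.diag(m)  -> [0, 4, 8]

          # Dropping i from the output sums those entries instead: the trace.
          trace = np.einsum('ii->', m)    # same as np.trace(m) -> 12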

  • @AndyLee-xq8wq
    @AndyLee-xq8wq 7 months ago

    cool

  • @hieunguyentrung8987
    @hieunguyentrung8987 3 years ago

    np.einsum('ik,kj->ij', x, y) is actually much, much slower than np.dot(x, y) when the matrix size gets larger.
    Also, tf.einsum is slightly slower than tf.matmul, but torch.einsum is slightly faster than torch.matmul...
    That's only from the perspective of my laptop's configuration, though.
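
    A rough, machine-dependent way to check this yourself (timings will vary; NumPy's optimize=True flag may also change the picture for larger contractions):

        import timeit
        import numpy as np

        x = np.random.rand(1000, 1000)
        y = np.random.rand(1000, 1000)

        # Compare plain dot, default einsum, and einsum with path optimization enabled.
        print(timeit.timeit(lambda: np.dot(x, y), number=10))
        print(timeit.timeit(lambda: np.einsum('ik,kj->ij', x, y), number=10))
        print(timeit.timeit(lambda: np.einsum('ik,kj->ij', x, y, optimize=True), number=10))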