MIT 6.S191 (2019): Visualization for Machine Learning (Google Brain)

  • Published 21 Dec 2024

COMMENTS • 23

  • @ThomasCzerniawski 5 years ago +28

    The multi-lingual embedding space is blowing my mind

  • @Xraid32 5 years ago +11

    Watching this after "Connections between Physics and Deep Learning" by Max Tegmark (2016) is interesting. He focuses a lot on properties like locality that exist both in the natural world we live in and in neural networks. It is a real "aha!" moment to see the languages behaving in such a local way in the embedding projector. I think that's very neat.

  • @julian_hesse 3 years ago +2

    This is such important work!!
    With visualizations like these we can begin to understand Neural Networks!!

  • @constcut 3 years ago +2

    This is a very inspiring video for me: in 2018 I was trying to develop an abstract model for emotion recognition from text, which I called 'semantic melodies', in connection with text tonalities. It was very crude, but along the way I found that some words like spirit/inspiration/respiration are connected in the same way in at least English and Russian. The points shown in this video could help develop those ideas further! I've seen a few videos on word embeddings already, but this one gives a much richer taste. Thank you so much, I'm very grateful this channel exists; please keep your work going, I'll be watching every new video!

  • @ArefinSami 3 years ago +2

    She knows how to talk and present.

  • @sandeepmandrawadkar9133 5 years ago +4

    A concept beyond imagination! Moving towards natural language...

  • @shivamraisharma1474 4 years ago +2

    What an amazing lecture and presentation 👏

  • @nintishia 3 years ago +2

    This lecture is a gem, and I'm left scratching my head as to why it has just a thousand likes; and seventeen dislikes -- really? About word vector visualizations: I'm just wondering whether it even makes sense to try to remove biases from word vectors without curating every single bit of the corpus. Is there some way out of this?

  • @runggp 5 years ago +7

    Awesome talk! Amazing tools!

  • @ArunKumar-bp5lo 1 year ago

    So many insights.

  • @ArefinSami 3 years ago +1

    Mind blowing.

  • @sator666666 4 years ago +1

    Great topic! Great talk!

  • @bokkieyeung504 5 years ago

    Sometimes there is no one-to-one mapping between two different languages; e.g., language A may simply have no word corresponding to a to-be-translated word in language B. How will the computer deal with such a case?

  • @Mrtlzjay 5 years ago +4

    A mistake in CIFAR-10, impressive!!!

  • @zt0t0s 4 years ago

    Is it a graduate or an undergraduate course?

  • @markus-sagen 5 years ago +1

    Great!

  • @devarsh099 4 years ago +2

    Wow

  • @pviccci 5 years ago +1

    Thanks!

  • @inamurrehman5785 3 years ago +1

    Whoa

  • @furkanturan7607 3 years ago

    There won't be any trouble left that hasn't befallen us because of these things :)

  • @9Xaero 5 years ago +3

  • @pervezbhan1708 3 years ago

    Easy to make a fool of people