Word Embeddings: Word2Vec

  • Published 7 Jan 2025

COMMENTS • 19

  • @nayanradadiya3288
    @nayanradadiya3288 3 months ago +7

    Top notch explanation with amazing animations!!

    • @_hex_tech
      @_hex_tech  3 months ago

      Appreciate it 🙏🏾

  • @infomaax_yt
    @infomaax_yt 3 months ago +4

    New Achievement Unlocked: Found another awesome channel to subscribe and watch grow 🌟🌟

  • @tmjthabethe
    @tmjthabethe 3 months ago +1

    Loving the motion graphics!

  • @IbrahimSowunmi
    @IbrahimSowunmi 3 months ago

    Fantastic breakdown

  • @geforex28
    @geforex28 3 months ago +1

    This really was a high-quality video, thank you

  • @harrydawitch
    @harrydawitch 3 months ago +1

    Keep doing what you're doing, my friend. I'll always be here supporting you.

  • @sankhuz
    @sankhuz 3 months ago

    What a great video, loved it ❤

  • @gorangagrawal
    @gorangagrawal 3 months ago +1

    To the point and simple. Thanks a lot.
    Do you mind sharing the tools used to make this beautiful piece of art? Looking to learn to make videos and share them with students.

    • @_hex_tech
      @_hex_tech  3 months ago

      🙏🏾. My tools are just Adobe Premiere, Hex, and Notion

  • @billbond2682
    @billbond2682 3 months ago

    what the fug, did this awesome video just pop up in my algorithm?

  • @crystalmuaz
    @crystalmuaz 3 months ago

    Subbing, commenting, and liking to boost the algorithm

  • @pablosanzo5584
    @pablosanzo5584 1 month ago

    Cool vid! What's the tool used for the word analogies and visualizations?

    • @_hex_tech
      @_hex_tech  1 month ago

      It's all done in Hex: hex.tech/

  • @MatheoDampfer-nl3no
    @MatheoDampfer-nl3no 2 months ago

    But how does the loss function work if the model doesn't know what is correct? And we humans couldn't judge the loss factually.

    • @MatheoDampfer-nl3no
      @MatheoDampfer-nl3no 2 months ago

      Think I understood: the model compares the probability of these words showing up together in other texts. Am I right? Thanks for this great video

    • @_hex_tech
      @_hex_tech  2 months ago +1

      The loss function learns from how words naturally appear together in text. It doesn't need an absolute "correct" answer - instead, it measures how well the model predicts actual word co-occurrences in the training data. If words like "cat" and "drinks" frequently appear near each other, the model learns to expect this pattern, and gets penalized when it predicts unrelated words.
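
An aside on that last reply: in Word2Vec's skip-gram variant this loss is typically implemented with negative sampling. The sketch below is a minimal NumPy illustration, not the video's code; the toy corpus, vector size, learning rate, and uniform negative-sampling distribution are all assumptions made for brevity. Real Word2Vec trains on large corpora and samples negatives by smoothed word frequency.

```python
# Minimal skip-gram-with-negative-sampling sketch (illustrative only).
# Observed (center, context) pairs are pushed toward a score of 1;
# randomly sampled "unrelated" words are pushed toward 0, which is the
# penalty described in the reply above.
import numpy as np

rng = np.random.default_rng(0)

corpus = "the cat drinks milk the dog drinks water".split()  # toy corpus (assumption)
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

dim, lr, window, epochs, n_neg = 16, 0.05, 2, 300, 3  # toy hyperparameters
W_in = rng.normal(scale=0.1, size=(len(vocab), dim))   # center-word vectors
W_out = rng.normal(scale=0.1, size=(len(vocab), dim))  # context-word vectors

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(epochs):
    for pos, word in enumerate(corpus):
        c = idx[word]
        for off in range(-window, window + 1):
            if off == 0 or not 0 <= pos + off < len(corpus):
                continue
            # Positive pair: a real co-occurrence; push its score toward 1.
            o = idx[corpus[pos + off]]
            v_c, v_o = W_in[c].copy(), W_out[o].copy()
            grad = sigmoid(v_c @ v_o) - 1.0
            W_in[c] -= lr * grad * v_o
            W_out[o] -= lr * grad * v_c
            # Negative samples: random words stand in for "unrelated" ones
            # (a real implementation would resample collisions with the
            # true context word).
            for n in rng.integers(0, len(vocab), n_neg):
                v_c, v_n = W_in[c].copy(), W_out[n].copy()
                grad = sigmoid(v_c @ v_n)  # target score is 0
                W_in[c] -= lr * grad * v_n
                W_out[n] -= lr * grad * v_c

def cosine(a, b):
    va, vb = W_in[idx[a]], W_in[idx[b]]
    return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))

# Words that appear in similar contexts ("cat" and "dog" both precede
# "drinks") tend to end up with similar vectors.
print(cosine("cat", "dog"), cosine("cat", "water"))
```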