Word Embeddings: Word2Vec
- Published Feb 9, 2025
- Word2Vec is a groundbreaking technique that transforms words into numerical vectors, capturing semantic relationships in language.
This video explores:
- How Word2Vec works to create meaningful word representations
- Practical applications in NLP and machine learning
- Some limitations of Word2Vec
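As a quick illustration of what those vector representations give you (my own sketch, not code from the video; it assumes gensim and its pretrained "word2vec-google-news-300" vectors are available via gensim's downloader):

```python
# Minimal sketch: words as vectors, and semantic relationships as geometry.
# Assumes gensim is installed; the pretrained model is a large download.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")

# Each word maps to a 300-dimensional vector.
print(vectors["cat"].shape)  # (300,)

# Related words end up close together (high cosine similarity).
print(vectors.similarity("cat", "dog"))        # relatively high
print(vectors.similarity("cat", "economics"))  # much lower

# The classic analogy: king - man + woman ≈ queen
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
```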
Hex is a collaborative workspace for exploratory analytics and data science. With Hex, teams can quickly reach insights in AI-powered notebooks using SQL, Python, & no-code, and instantly share their work with anyone.
Links
------------------
Visit our website: hex.tech/
Stay connected on twitter - / _hex_tech
Stay connected on LinkedIn - / mycompany
One of the best explanations of word2vec!
Top notch explanation with amazing animations!!
Appreciate it 🙏🏾
It was brilliant! I enjoyed and deeply understood the word2vec concept from your content. Please keep up the brilliant work 🥰🥰🥰
New Achievement Unlocked: Found another awesome channel to subscribe and watch grow 🌟🌟
This really was a high quality video thank you
Loving the motion graphics!
Keep doing what you are doing, my friend. I'll always be here supporting you.
What a great video, Loved it ❤
Fantastic breakdown
To the point and simple. Thanks a lot.
Do you mind sharing the tools used to make this beautiful piece of art? Looking to learn how to make videos and share them with students.
🙏🏾 My tools are just Adobe Premiere, Hex, and Notion
What the fug, did this awesome video just pop up in my algorithm?
😏
Subbing and commenting and liking to boost algorithm
🤝
Cool vid! What is the tool you used for the word analogies and visualizations?
It's all done in Hex: hex.tech/
But how does the loss function work if the model doesn't know what is correct? And we humans couldn't judge the loss factually.
Think I understood: the model compares the probability of these words showing up together in other texts. Am I right? Thanks for this great video
The model learns from how words naturally appear together in text. The loss doesn't need an absolute "correct" answer - instead, it measures how well the model predicts actual word co-occurrences in the training data. If words like "cat" and "drinks" frequently appear near each other, the model learns to expect this pattern, and it gets penalized when it predicts unrelated words instead.
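To make that concrete, here's a minimal sketch (my own illustration, not the video's code) of the skip-gram negative-sampling loss commonly used with Word2Vec; the tiny vocabulary, vector size, and sampled "negative" words below are assumptions for the example:

```python
# Skip-gram negative sampling: reward high scores for observed (center, context)
# pairs, penalize high scores for randomly sampled unrelated words.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["cat", "drinks", "milk", "quantum", "tariff"]
dim = 8

# Two embedding tables: one for center words, one for context words.
center_vecs = rng.normal(scale=0.1, size=(len(vocab), dim))
context_vecs = rng.normal(scale=0.1, size=(len(vocab), dim))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_loss(center_id, context_id, negative_ids):
    """Negative-sampling loss for one (center, context) pair."""
    v_c = center_vecs[center_id]
    pos_score = sigmoid(context_vecs[context_id] @ v_c)      # observed pair
    neg_scores = sigmoid(-context_vecs[negative_ids] @ v_c)  # sampled negatives
    # Loss is low when the real pair scores high and the unrelated words score low.
    return -np.log(pos_score) - np.sum(np.log(neg_scores))

# "cat" and "drinks" co-occur in the training text; "quantum" and "tariff"
# are random negatives the model should learn to rank below "drinks".
loss = sgns_loss(vocab.index("cat"), vocab.index("drinks"),
                 [vocab.index("quantum"), vocab.index("tariff")])
print(f"loss before training: {loss:.3f}")
```

Training then nudges both embedding tables to shrink this loss over millions of real (center, context) pairs, which is exactly the "penalized when it predicts unrelated words" behavior described above.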