Text Embeddings, Classification, and Semantic Search (w/ Python Code)

  • Published Dec 25, 2024

COMMENTS • 70

  • @ShawhinTalebi
    @ShawhinTalebi  9 months ago +2

    👉More on LLMs: ua-cam.com/play/PLz-ep5RbHosU2hnz5ejezwaYpdMutMVB0.html
    --
    References
    [1] ua-cam.com/video/A8HEPBdKVMA/v-deo.htmlsi=PA4kCnfgd3nx24LR
    [2] R. Patil, S. Boit, V. Gudivada and J. Nandigam, “A Survey of Text Representation and Embedding Techniques in NLP,” in IEEE Access, vol. 11, pp. 36120-36146, 2023, doi: 10.1109/ACCESS.2023.3266377.
    [3] owasp.org/www-project-top-10-for-large-language-model-applications/

  • @ccapp3389
    @ccapp3389 8 months ago +30

    Love that you’re bringing real knowledge, insights and code here! So many AI UA-camrs are just clickbaiting their way through the hype cycle by reading the same SHOCKING news as everyone else.

    • @tylerpoore97
      @tylerpoore97 8 months ago

      I mean, the guy clickbaited the thumbnail. Also, this is insanely old news at this point (if considered news at all).
      Video content was on point, but we shouldn't be promoting clickbait methods.

    • @ccapp3389
      @ccapp3389 8 months ago +3

      I clicked this video for technical explanations and code, not news. There are plenty of dudes reading off the same SHOCKING news across AI UA-cam. I got exactly what I wanted from this video and feel like the title was clear.

  • @krishnavamsiyerrapatruni5385
    @krishnavamsiyerrapatruni5385 7 months ago +6

    I have learnt so much by watching the entire series. Thank you so much Shaw! I think this is one of the best playlists out there for anyone looking to get into the field of LLMs and GenAI.

    • @ShawhinTalebi
      @ShawhinTalebi  7 months ago

      Great to hear! Feel free to share any suggestions for future content :)

  • @BrandonFoltz
    @BrandonFoltz 8 months ago +5

    Great video. The practical use cases for embeddings themselves are undervalued IMHO and this video is fantastic for showing ways to use embeddings. Even if you use OpenAI embeddings, they are dirt cheap, and can provide fantastic vectors for further analysis, manipulation, and comparison.

    • @ShawhinTalebi
      @ShawhinTalebi  8 months ago

      Thanks Brandon! I completely agree. Agents are great, but they seem to overshadow all the relatively simple text embedding-based applications.

  • @youngzproduction7498
    @youngzproduction7498 4 months ago +1

    I love how you give a low-level lesson. It helps me understand the topic more deeply and see more potential for applying it in other areas. Long story short, you've got a new subscriber. I'll consume all your knowledge and make the best of it.

    • @ShawhinTalebi
      @ShawhinTalebi  4 months ago

      Thanks for subscribing :) Glad it was helpful!

  • @aldotanca9430
    @aldotanca9430 8 months ago +1

    Exceptionally clear as always!

  • @pramodkumarsola
    @pramodkumarsola 8 months ago +2

    You are the real guy to subscribe to and learn from.

  • @obaydmir8353
    @obaydmir8353 8 months ago +1

    Clear and understandable explanation of these concepts. Thanks, really enjoyed it!

  • @LouvoresPauloRicardo
    @LouvoresPauloRicardo 8 months ago +1

    Congrats man! Keep going with more real examples and code sharing.

  • @banoffanimations5704
    @banoffanimations5704 4 months ago

    Hi Shaw!!! Really great stuff... I am loving this series!!! I echo everyone else in agreeing that your videos are super informative and hands-on!!! Very, very useful!!! Many thanks man!

  • @ethanlazuk
    @ethanlazuk 8 months ago

    SEO here, enjoyed your examples of semantic search and explanation of hybrid search. Great vid and easy to follow. Will explore your channel. Cheers!

  • @АнтонБ-х9у
    @АнтонБ-х9у 2 days ago

    Very helpful. 🎉

  • @enmutlu-c4j
    @enmutlu-c4j 4 months ago

    Great video! Super clear and on point. Thanks Shaw!

  • @ifycadeau
    @ifycadeau 9 months ago +1

    Wow! Thank you for breaking this down, been trying to figure it out!

  • @blackswann9555
    @blackswann9555 8 months ago

    Excellent work sir! ❤

  • @greatwall2003
    @greatwall2003 6 months ago

    Thanks, useful material 👍

  • @databasemadness
    @databasemadness 8 months ago

    Love you shaw!

  • @jamespeters1617
    @jamespeters1617 2 months ago

    Great info

  • @avi7278
    @avi7278 8 months ago

    Great format subd

  • @dr.aravindacvnmamit3770
    @dr.aravindacvnmamit3770 8 months ago

    Excellent!

  • @uzairmalik7084
    @uzairmalik7084 5 months ago

    Hey Shaw, thanks for this wonderful series. I have completed it and learned so many new things, but one thing I felt is that the code is very high level, and it feels like I have to memorize most of it while practicing with those Hugging Face models. Do you have any suggestions for that?

    • @ShawhinTalebi
      @ShawhinTalebi  5 months ago

      I think the best way to solidify your understanding is to apply it to real-world use cases.

  • @eliskucevic340
    @eliskucevic340 8 months ago

    I've been using embeddings for a while, but I find that agents can call specialized tools that can be very useful depending on the application.

    • @ShawhinTalebi
      @ShawhinTalebi  8 months ago

      Thanks for sharing your insight! Indeed, agents and embeddings solve different problems. However, some agent use cases could be reconfigured to be solved with text embeddings + human in the loop.

  • @KrisTC
    @KrisTC 8 months ago +3

    I have watched most of the videos in this series and found them really helpful. Something I am looking for that I haven't seen you cover yet is more guidance on preparing data for either RAG or fine-tuning. I am sure you have practical tips you can give. I have a large old codebase; we have loads of documentation, tutorials, etc., but it is a lot for someone to pick up. This new world of GPTs seems perfect for building an assistant. I will be able to work through it OK, but I suspect there will be a load of learned best practices and pitfalls to avoid that are a bit more subtle. For example, I am looking through our support emails / tickets; lots of them start with "please send logs" :) and after a load of back and forth we have the info. This is much like a conversation with ChatGPT. For fine-tuning, is it best to fine-tune on a whole thread? Or on each chunk of the conversation?

    • @ShawhinTalebi
      @ShawhinTalebi  8 months ago +3

      Great suggestion! I plan to do a series on data engineering, and this would be a great thing to incorporate into it.
      For your use case, the best choice would depend on what you want the assistant to do. For instance, if you want the assistant to mimic the support rep, then you'd likely want to use each message in the thread with its appropriate context (i.e. the preceding messages).

    • @KrisTC
      @KrisTC 8 months ago

      @@ShawhinTalebi thanks for the tip. That's what I ended up doing. Haven't actually tried fine-tuning yet; just finished my data prep. Looking forward to your next series 😊
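Editor's note: the per-message formatting Shaw suggests above (each rep reply paired with the preceding messages as context) can be sketched in Python. The thread content, the system prompt, and the OpenAI-style `messages` layout below are all illustrative assumptions, not code from the video.

```python
# Toy sketch: turn one support thread into fine-tuning examples, where each
# support-rep reply becomes a target with all preceding messages as context.
# The thread content and chat format are made up for illustration.

thread = [
    ("customer", "My pipeline fails on startup."),
    ("rep", "Please send logs."),
    ("customer", "Logs attached: [error: missing config]"),
    ("rep", "Add the config file and retry."),
]

def thread_to_examples(thread):
    examples = []
    history = []
    for role, text in thread:
        if role == "rep":
            # One training example per rep reply: context -> reply
            examples.append({
                "messages": (
                    [{"role": "system", "content": "You are a support rep."}]
                    + history
                    + [{"role": "assistant", "content": text}]
                )
            })
        history.append(
            {"role": "user" if role == "customer" else "assistant", "content": text}
        )
    return examples

examples = thread_to_examples(thread)
print(len(examples))  # one training example per rep message
```

Whether to train on whole threads or per-message pairs like this depends, as Shaw says, on what the assistant should mimic.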

  • @superfreiheit1
    @superfreiheit1 4 months ago

    Awesome teaching quality. A simple start into text embeddings for beginners. But it would be better to use an open-source model to create the embeddings; the OpenAI API is paid.

    • @ShawhinTalebi
      @ShawhinTalebi  4 months ago

      Thanks :)
      I use open source embeddings in my latest video: ua-cam.com/video/3JsgtpX_rpU/v-deo.htmlsi=ricuwaoSJSYnSAQM&t=843

  • @giantbush4258
    @giantbush4258 4 days ago

    Awesome

  • @cinematicsounds
    @cinematicsounds 8 months ago

    Thank you, very good information. I will try to build a database for audio sound effects using vector databases (text-to-audio).

  • @hoseinsalahshoor635
    @hoseinsalahshoor635 2 months ago

    Thank you for your useful video. I have a question regarding the OpenAI embedding models. Does OpenAI fine-tune its models on our data if we use these (embedding) models? ... My data is private and I don't want to expose it. Thanks

    • @ShawhinTalebi
      @ShawhinTalebi  2 months ago

      OpenAI's privacy policy says they do not train on API data: openai.com/enterprise-privacy/

  • @pepeballesteros9488
    @pepeballesteros9488 8 months ago

    Many thanks for the video Shaw, great content!
    One simple question: when using OpenAI's embedding model, each resume is represented by an embedding vector. Is this embedding computed as the average of all word vectors?

    • @ShawhinTalebi
      @ShawhinTalebi  8 months ago

      Great question! Embedding models do not operate on specific words, but rather on the text as a whole. This is valuable because the meaning of a specific word is driven by the context it appears in.
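Editor's note: a toy way to see why whole-text embeddings matter is that averaging static per-word vectors is order-blind, so two sentences with the same words but different meanings collapse to the same vector. The 3-d word vectors below are made up purely for illustration.

```python
import numpy as np

# Toy static word vectors (made up). Averaging them ignores word order:
# "dog bites man" and "man bites dog" get identical average vectors,
# which is why modern models embed the text as a whole instead.
word_vecs = {
    "dog":   np.array([1.0, 0.0, 0.0]),
    "bites": np.array([0.0, 1.0, 0.0]),
    "man":   np.array([0.0, 0.0, 1.0]),
}

def average_embedding(sentence):
    # Mean of the per-word vectors: a bag-of-words-style representation
    return np.mean([word_vecs[w] for w in sentence.split()], axis=0)

a = average_embedding("dog bites man")
b = average_embedding("man bites dog")
print(np.allclose(a, b))  # True: the average can't tell these apart
```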

  • @sherpya
    @sherpya 9 months ago

    Is it possible to extract software names from the query with a text classifier and apply only, e.g., "apache airflow" to the KW search? Also, what DB do you suggest? Is Postgres with vector support good?

    • @ShawhinTalebi
      @ShawhinTalebi  8 months ago

      Good question. While I haven't seen a text classifier used for KW search, that could be a clever way to implement it.
      There are several DBs to choose from these days. I'd say go with what makes sense with your existing data infrastructure. If starting from scratch, Elasticsearch or Pinecone might be good jumping-off points.

    • @aldotanca9430
      @aldotanca9430 8 months ago

      lanceDB is also quite good.
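Editor's note: the keyword-filter-then-rank idea discussed in this thread can be sketched with toy data. The documents and 2-d vectors below are made up; in practice the vectors would come from an embedding model and the filtering/ranking from whichever DB you pick.

```python
import numpy as np

# Toy hybrid search: filter documents by a keyword, then rank the survivors
# by cosine similarity against the query embedding. Vectors are invented.
docs = [
    {"text": "apache airflow dag scheduling", "vec": np.array([0.9, 0.1])},
    {"text": "apache airflow sensor timeout", "vec": np.array([0.2, 0.9])},
    {"text": "kubernetes pod restarts",       "vec": np.array([0.8, 0.2])},
]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def hybrid_search(query_vec, keyword):
    hits = [d for d in docs if keyword in d["text"]]  # keyword filter
    return sorted(hits, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)

results = hybrid_search(np.array([1.0, 0.0]), keyword="airflow")
print(results[0]["text"])  # the airflow doc closest to the query vector
```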

  • @avi7278
    @avi7278 8 months ago

    finally someone who speaks with their hands more than I do, lol...

    • @ShawhinTalebi
      @ShawhinTalebi  8 months ago

      😂😂.. 👋 👍

    • @toddai2721
      @toddai2721 7 months ago

      I call him the hand whisperer.... but really loud.

  • @AlexandreMarr-uq8pw
    @AlexandreMarr-uq8pw 6 months ago

    Can only two classes be used? If I have many types, for example product categories, can this still be applied?

    • @ShawhinTalebi
      @ShawhinTalebi  6 months ago

      You can have several target classes. Here's a nice write-up about doing that with sklearn: scikit-learn.org/stable/modules/multiclass.html
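Editor's note: building on that reply, here is a minimal multi-class sketch using sklearn's `LogisticRegression` on toy 4-d "embeddings" drawn around three invented product categories. In practice the feature vectors would come from a real embedding model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy multi-class setup: three made-up product categories, each a cluster
# of noisy 4-d vectors standing in for text embeddings.
rng = np.random.default_rng(0)
centers = {"books": [1, 0, 0, 0], "toys": [0, 1, 0, 0], "tools": [0, 0, 1, 0]}

X, y = [], []
for label, center in centers.items():
    for _ in range(20):
        X.append(np.array(center) + 0.1 * rng.standard_normal(4))
        y.append(label)

# sklearn handles the multi-class case automatically from the labels in y
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([centers["toys"]])[0])  # → 'toys'
```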

  • @tamilinfomite
    @tamilinfomite 8 months ago

    Hi Shawhin, thanks. I ran into a problem. I tried to use a sentence_transformers model after installing the package. It always gives an error: file not found 'config_sentence_transformers.json' in the .cache/huggingface/... folder. Your help is appreciated.

    • @ShawhinTalebi
      @ShawhinTalebi  8 months ago

      Not sure what the issue could be. Did you install all the requirements on the GitHub?
      github.com/ShawhinT/UA-cam-Blog/tree/main/LLMs/text-embeddings

  • @alroygama6166
    @alroygama6166 7 months ago

    Can I use these embeddings with BERT-based models instead?

    • @ShawhinTalebi
      @ShawhinTalebi  7 months ago

      Yes! In fact, Sentence Transformers has a few BERT-based embedding models: sbert.net/docs/pretrained_models.html

  • @Whysicist
    @Whysicist 8 months ago

    LDA (Latent Dirichlet Allocation) is kinda trivial these days… MATLAB's Text Analytics Toolbox works great on PDFs with bi-grams… a la bag-of-n-grams. Cool… thanks…

  • @chamaljayasinghe4210
    @chamaljayasinghe4210 7 months ago

    ✌✌🧑‍💻🧑‍💻

  • @skarloti
    @skarloti 8 months ago

    This is not always a good solution if we have multilingual text. I see LLMs with 1M-token contexts; they offer other solutions with functions and external API calls.

    • @ShawhinTalebi
      @ShawhinTalebi  8 months ago

      I'm curious about this. I've seen embedding models that can handle multiple languages, so I'd expect them to work pretty well. Can you shed any more light on this?

  • @enestemel9490
    @enestemel9490 a month ago

    It's not very good practice to compare AI assistants and text embeddings, since AI assistants also work with tokens (which are the embeddings for each text chunk).

  • @tylerpoore97
    @tylerpoore97 8 months ago

    So, unlike your thumbnail, this has nothing to do with agents...
    Why mention them?

    • @ShawhinTalebi
      @ShawhinTalebi  8 months ago

      Thumbnail is "Forget AI agents... use this instead". I explain this a bit @3:15.

  • @cirtey29
    @cirtey29 7 months ago

    By the end of next year, all the drawbacks of LLMs will be erased.

  • @onjajaboy
    @onjajaboy 8 months ago

    Are you Persian?

  • @AndresSolar-y3g
    @AndresSolar-y3g 7 months ago

    cool...

  • @bentobin9606
    @bentobin9606 8 months ago

    Is text embedding the same as the text tokenization done in training?

    • @ShawhinTalebi
      @ShawhinTalebi  8 months ago +2

      Good question! These are different things.
      Tokenization is the process of taking some text and deriving a vocabulary from which the original text can be generated, where each element in the vocabulary is assigned a unique integer value.
      Text embeddings, on the other hand, take tokens and translate them into meaningful (numerical) representations.
      I talk a little more about tokenization here: ua-cam.com/video/czvVibB2lRA/v-deo.htmlsi=FwqmkB9Ltyq45n0w&t=348
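Editor's note: the distinction in this reply can be shown with a toy example, where a hand-built vocabulary plays the tokenizer and a small made-up matrix plays the embedding layer. Real models learn both from data.

```python
import numpy as np

# Toy illustration of the two steps: a tokenizer maps text to integer IDs,
# and an embedding matrix maps each ID to a dense vector. Both the
# vocabulary and the vectors are invented for this sketch.
vocab = {"the": 0, "cat": 1, "sat": 2}   # tokenization: text -> integers
embedding_matrix = np.array([             # embedding: integer -> vector
    [0.1, 0.2],   # "the"
    [0.9, 0.8],   # "cat"
    [0.4, 0.5],   # "sat"
])

tokens = [vocab[w] for w in "the cat sat".split()]
vectors = embedding_matrix[tokens]

print(tokens)         # [0, 1, 2]
print(vectors.shape)  # (3, 2): one vector per token
```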