Book Recommendation System in Python with LLMs

  • Published 31 Dec 2024

COMMENTS • 17

  • @thomasgoodwin2648
    @thomasgoodwin2648 5 months ago +6

    Just an odd thought/observation... we always seem to choose the closest objects as part of the recommender, but wouldn't it be useful to also include the farthest as counterpoints? We tend to pick what's comfortable, but sometimes it's good to get an opposing POV.
    🖖😺👍
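
A rough sketch of that idea, assuming embeddings already live in a NumPy array (the data below is a made-up toy example, not the video's dataset): take the top-k most similar items as usual, and also surface the bottom-k as "counterpoints".

```python
import numpy as np

def nearest_and_farthest(query_vec, embeddings, k=3):
    """Return the k most similar items and the k least similar "counterpoints"."""
    # Cosine similarity between the query and every stored embedding
    sims = embeddings @ query_vec / (
        np.linalg.norm(embeddings, axis=1) * np.linalg.norm(query_vec)
    )
    order = np.argsort(sims)           # indices sorted by ascending similarity
    nearest = order[-k:][::-1]         # top-k, most similar first
    farthest = order[:k]               # bottom-k: the opposing-POV picks
    return nearest, farthest

# Toy data: 10 random "book" embeddings; query with the first one
rng = np.random.default_rng(42)
emb = rng.normal(size=(10, 8))
near, far = nearest_and_farthest(emb[0], emb)
```

Since the query is item 0 itself, it comes back first in `near`; the `far` indices are the candidates a "show me something different" feature could draw from.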

  • @parsagholampour
    @parsagholampour 4 months ago

    Thanks a million times for this precious content. 💚

  • @ulfrottger9171
    @ulfrottger9171 26 days ago

    thanks a lot! absolutely inspiring

  • @peterwassmuth4014
    @peterwassmuth4014 5 months ago +1

    Awesome, thank you for sharing 💯✴

  • @ChristopherBruns-o7o
    @ChristopherBruns-o7o 5 months ago +5

    I understand embeddings, but the similarity search is actually not AI - like we could do the same with just numpy and pandas, right?

    • @NeuralNine
      @NeuralNine  5 months ago +1

      Yeah, the AI part, which requires intelligence, is the embedding. The search is just a bunch of distance calculations.
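
A minimal sketch of that point: once the embeddings exist, retrieval is plain array math with no model in the loop (the vectors below are hard-coded stand-ins, not real model output).

```python
import numpy as np

# Pretend these came from an embedding model; here they are fabricated stand-ins.
book_vecs = np.array([
    [0.9, 0.1, 0.0],
    [0.8, 0.2, 0.1],
    [0.0, 0.1, 0.9],
])
query = np.array([1.0, 0.0, 0.0])

# Euclidean distance from the query to every book - pure NumPy, no LLM call.
dists = np.linalg.norm(book_vecs - query, axis=1)
best = int(np.argmin(dists))  # index of the closest book
```

Here `best` is 0, the first vector, since it lies closest to the query; swapping in cosine similarity or a vector database changes the mechanics, not the principle.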

  • @yuehpo-peng
    @yuehpo-peng 1 month ago

    Thanks for the great video! I have a couple of questions:
    1. Are LLMs currently practical for use as recommender systems in the industry, or are other deep learning methods like reinforcement learning (RL) more commonly applied?
    2. Extracting item embeddings with LLMs seems quite time-consuming. Could this make them less applicable in real-world scenarios? Would it be more efficient to extract these embeddings offline instead?
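
On the second question, precomputing embeddings offline and caching them is indeed a common pattern; here is a sketch of the idea, where `embed_fn` and the cache filename are hypothetical stand-ins for whatever model and storage you actually use.

```python
import numpy as np
from pathlib import Path

EMB_FILE = Path("book_embeddings.npy")  # hypothetical cache location

def embed_fn(texts):
    # Stand-in for a real embedding-model call; returns fake fixed-size vectors.
    return np.array([[float(len(t)), float(sum(map(ord, t))) % 97] for t in texts])

def get_embeddings(texts):
    """Compute embeddings once offline, then reuse the cached file on later runs."""
    if EMB_FILE.exists():
        return np.load(EMB_FILE)   # fast path: no model call at request time
    vecs = embed_fn(texts)         # slow path: run the model once, offline
    np.save(EMB_FILE, vecs)
    return vecs

books = ["Dune", "Neuromancer", "Hyperion"]
vecs = get_embeddings(books)
```

With this split, the expensive extraction happens once in a batch job; serving a request only loads the `.npy` file and does distance math.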

  • @arek7198
    @arek7198 5 months ago +1

    Thank you very much for your program

  • @dipeshsamrawat7957
    @dipeshsamrawat7957 5 months ago +1

    Thank you 😊

  • @andiglazkov4915
    @andiglazkov4915 5 months ago +1

    Thanks 😊

  • @kshitijnishant4968
    @kshitijnishant4968 4 months ago

    In one comment you said that "the AI part, which requires intelligence is the embedding and the search is just a bunch of distance calculations". But the search is also happening using the Llama2 model itself here, right?

    • @heroe1486
      @heroe1486 4 months ago

      No, the search is just calculations and can be done by hand or with the help of any vector database; it doesn't involve the model.

  • @SOHAMSHRIRAM
    @SOHAMSHRIRAM 2 months ago

    What if we do this on Google Colab? It won't give the issue with system requirements, right?

  • @kshitijnishant4968
    @kshitijnishant4968 4 months ago +1

    No link to the dataset down below :')

  • @kadbed
    @kadbed 4 months ago +2

    For those wondering how long it would take to run the entire dataset on a 2019 8 GB Intel MacBook Pro, it took me 2478 minutes 🥶

    • @Amina-jb6mh
      @Amina-jb6mh 3 months ago

      Do you have to do this once, or every time we have a request?

  • @jorgemolto6862
    @jorgemolto6862 5 months ago

    you're the best