Just an odd thought/observation... we always seem to choose the closest objects as part of the recommender, but wouldn't it be useful to also include the farthest as counterpoints? We tend to pick what's comfortable, but sometimes it's good to get an opposing POV.
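A rough numpy sketch of what I mean (item_vecs and query_vec are made-up names, and the embeddings are assumed to be L2-normalized already so the dot product is cosine similarity):

    import numpy as np

    # item_vecs: (n_items, dim) embedding matrix, query_vec: (dim,) vector
    def closest_and_farthest(query_vec, item_vecs, k=5):
        sims = item_vecs @ query_vec
        order = np.argsort(sims)      # ascending similarity
        closest = order[-k:][::-1]    # the usual recommendations
        farthest = order[:k]          # "counterpoint" items from the other end
        return closest, farthest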
🖖😺👍
Thanks a million times for this precious content. 💚
Thanks a lot! Absolutely inspiring.
Awesome! Thank you for sharing 💯✴
I understand the embedding, but the similarity search is actually not AI - like we could do the same with just numpy and pandas, right?
Yeah, the AI part, which requires intelligence, is the embedding. The search is just a bunch of distance calculations.
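To make that concrete, here's a minimal numpy sketch of the whole "search", with toy random embeddings standing in for real ones:

    import numpy as np

    # toy stand-ins: 1000 item embeddings plus one query, all L2-normalized
    rng = np.random.default_rng(0)
    item_vecs = rng.normal(size=(1000, 384))
    item_vecs /= np.linalg.norm(item_vecs, axis=1, keepdims=True)
    query_vec = item_vecs[0]

    sims = item_vecs @ query_vec            # cosine similarity to every item
    top10 = np.argsort(sims)[-10:][::-1]    # indices of the 10 closest items

One matrix-vector product and a sort; no model anywhere in that step.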
Thanks for the great video! I have a couple of questions:
1. Are LLMs currently practical for use as recommender systems in the industry, or are other deep learning methods like reinforcement learning (RL) more commonly applied?
2. Extracting item embeddings with LLMs seems quite time-consuming. Could this make them less applicable in real-world scenarios? Would it be more efficient to extract these embeddings offline instead?
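On question 2: yes, the standard pattern is exactly what you suggest. The item embeddings are computed offline in a batch job and cached, so the request path only does the distance math. A rough sketch, using sentence-transformers as a stand-in for whatever embedding model you actually use (the two-item catalog is just a placeholder):

    import numpy as np
    from sentence_transformers import SentenceTransformer

    # offline batch job: run once per catalog update, never per user request
    model = SentenceTransformer("all-MiniLM-L6-v2")
    item_texts = ["item description 1", "item description 2"]  # your catalog here
    vecs = model.encode(item_texts, normalize_embeddings=True)
    np.save("item_embeddings.npy", vecs)

    # online serving: load the cached matrix; no embedding model on the hot path
    item_vecs = np.load("item_embeddings.npy")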
Thank you very much for your program
Thank you 😊
Thanks 😊
In one comment you said that "the AI part, which requires intelligence is the embedding and the search is just a bunch of distance calculations". But the search is also happening using the Llama2 model itself here, right?
No, the search is just calculations and can be done by hand or with the help of any vector database; it doesn't involve the model.
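For example, with FAISS as the vector database (a rough sketch with toy float32 embeddings standing in for real ones):

    import numpy as np
    import faiss  # pip install faiss-cpu

    # toy stand-ins: a 1000-item catalog and one query, 384-dim embeddings
    item_vecs = np.random.rand(1000, 384).astype("float32")
    query_vec = np.random.rand(1, 384).astype("float32")

    index = faiss.IndexFlatIP(item_vecs.shape[1])  # exact inner-product index
    index.add(item_vecs)                           # index the catalog once
    scores, ids = index.search(query_vec, 10)      # 10 nearest items, no LLM involved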
What if we do this on Google Colab? It won't give us issues with system requirements, right?
No link to the dataset down below :')
For those wondering how long it would take to run the entire dataset on a 2019 8 GB Intel Macbook Pro, it took me 2478 minutes 🥶
Do you have to do this once, or every time we have a request?
You're the best