🔥Join the AI Engineer Bootcamp:
Hey there! The second edition of the AI Engineering Cohort is starting soon.
- Learn with step-by-step lessons and exercises
- Join a community of like-minded and amazing people
- I'll be there to personally answer all your questions 🤓
- The spots are limited since I'll be directly interacting with you
You can join the waitlist now 👉 course.alejandro-ao.com/
Cheers!
Thank you for your effort. This is by far the most structured and easy-to-understand introduction to the LlamaIndex / RAG topic.
really appreciate it! i'm glad it was helpful :)
Yes. More on LlamaIndex please. I so appreciate your clear and thoughtful tutorials. Beautifully done!
i appreciate it! absolutely 💪
Yes, please make a whole series, especially applications based on use cases. @alejandro_ao
Thank you so much man, just yesterday I was struggling with the same thing. There is no recent content on llamaindex, everything is outdated. This is a life-saver, please continue this series.
you can count on that
Been looking at this for a few weeks and this is the perfect start for anyone wanting to understand RAG and llama index. Fantastic video :)
Awesomely information-dense. Thanks man
I'm considering using llamaindex for my AI application project after viewing your wonderfully done video, which is straightforward, simple and absolutely understandable. I'm hoping for a follow-up video from you on how to deploy llamaindex online in a realistic entrepreneurial setting.
Always eager to start building things after watching one of your videos. You really have a talent for explaining things super clearly and simply.
Works very well and at very low cost (pennies per PDF). Further savings by 'Persisting the Index'... Thank you yet again! I owe you a whole pot of coffee! 😄
LlamaIndex looks like a survivor. Would love to see some of the advanced new features in your coming tutorials.
totally agree. i am sure they will be shaping the AI app sphere for a long time
Thanks for the up-to-date video on LlamaIndex! It would have been helpful to explicitly mention the deltas from a few months back.
Excellent explanation as always. Congratulations. 👏👏👏
Bro, you are literally doing such a useful thing. Please do more videos, it's very helpful. Lots of love from the student community ❤️
Really nice and to-the-point tutorial. Thank you.
thank **you**
Thank you very much for this quality content (like all your videos), can't wait for the next one! (I hope you will cover building agent-based RAG.)
I was searching about llamaindex yesterday on your YouTube channel
we're in sync 😎
the moment you say good morning, I feel like I just woke up on a flight to a pilot announcement. Good stuff btw.
Excellent as usual! And useful as usual. Thanks and stay cool. 😎
thank you brandon! always a pleasure to see you around!
Nice & articulate. Thanks for putting this out.
I appreciate it :) expect many more to come
i'd like to learn llamaindex, but i wonder if i'll just be spreading myself too thin by trying to master both langchain and llamaindex. do you have any advice?
Their documentation is lacking, so thank you for this. Question: In your code editor, I noticed the hover-over text: "start coding or generate with AI". What code generator service and/or plugin are you using, if you don't mind me asking? 😊 (E.g. GitHub Copilot, etc). Thank you.
Edit: Ah, it's probably whatever CoLab offers. I was too focused on the LlamaIndex talk to notice the IDE was CoLab. LoL
Good video. I have seen few videos that explain this topic this well. Greetings from Chile.
Love your videos!!! Great content here again. One question: at 30:20, where the index gets created locally, what do the subfolders look like? "image_vector_store", "graph_store"... does this mean the data loaders would also split a PDF into plain text, graphs and images, and then store the respective embeddings in separate folders? I tried it on my own PDFs but could not make much sense of the index files, unfortunately...
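In case it helps, a minimal sketch of persisting the index locally and loading it back, assuming the default local storage context (the exact JSON file names, e.g. docstore.json, index_store.json, default__vector_store.json, image__vector_store.json, graph_store.json, can vary between llama_index versions). As far as I understand, those stores come from the default storage context components rather than from the loader splitting the PDF by modality:
```python
from llama_index.core import (
    SimpleDirectoryReader,
    StorageContext,
    VectorStoreIndex,
    load_index_from_storage,
)

# build the index once and persist it to disk
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
index.storage_context.persist(persist_dir="storage")

# on later runs, reload the persisted index instead of re-embedding everything
storage_context = StorageContext.from_defaults(persist_dir="storage")
index = load_index_from_storage(storage_context)
query_engine = index.as_query_engine()
```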
Can this setup be implemented within a protected infrastructure? I have sensitive data that I don't want to leave my network
Very precise and very useful!
hey there, thanks! glad to see you around!
It helped me a lot! Thanks for the video
no worries!
This is perfect
you are
Great tutorial!!
Very good presentation. Thank you
you're welcome!
Their recent documentation is really, really good
they are awesome indeed
Great video! Keep 'em coming! Quick question: when you load documents, can it get the documents recursively inside data? Like if there are more folders inside folders?
Is there any limit to loading documents? Any additional advice on loading documents?
What if a document has many pages with a footer and a header containing repetitive content? Could that negatively affect the retrieval?
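For the recursion part, a minimal sketch assuming the recursive and required_exts parameters available in recent llama_index versions (the folder name and extension filter are just examples):
```python
from llama_index.core import SimpleDirectoryReader

# recursive=True walks subfolders nested inside "data" as well
reader = SimpleDirectoryReader(
    input_dir="data",
    recursive=True,
    required_exts=[".pdf"],  # optionally restrict which file types get loaded
)
documents = reader.load_data()
print(len(documents))
```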
Great video. So if I understand correctly, the code example only shows the parsing into documents? So no nodes, embedding/vectorising, or persistent storage in a vector DB? Any observations on weaknesses/strengths in comparison with langchain, both the parts up to the vector DB and the parts from the user up to the vector DB?
Thank You✊🏾💎
😌 no problem
Interesting~
Looking forward to an agentic RAG system built with function calling, etc.
Alejandro, thank you for the clear introduction to LlamaIndex. Instead of using the OpenAI API, how can we use a model from Hugging Face?
definitely, coming up!
@alejandro_ao thank you! waiting :)
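While waiting for that video, here is a rough sketch of swapping in Hugging Face models through the global Settings. It assumes the llama-index-embeddings-huggingface and llama-index-llms-huggingface integration packages are installed, and the model names below are only examples:
```python
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.llms.huggingface import HuggingFaceLLM

# replace the OpenAI defaults with Hugging Face models
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
Settings.llm = HuggingFaceLLM(
    model_name="HuggingFaceH4/zephyr-7b-beta",
    tokenizer_name="HuggingFaceH4/zephyr-7b-beta",
)

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
print(index.as_query_engine().query("What is the bootcamp about?"))
```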
You should become a professor; it would benefit thousands of students in your country. Well taught.
i hope to do that one day! thank you, it means a lot!
Excellent
hello friend 🙌
Can you make a video where you discuss how to test a RAG system?
Really interested to see a fully open source version of this with Hugging Face embeddings and models.
coming up!! sorry been super busy with the cohort 😅
can i set up llamaindex on my own server? i don't want to use an api or send data to someone else's server
what's the difference between (llamaindex for chatbot creation) and (langchain + streamlit... for the pdf bot, the video you did last time)?
which of them is more suitable if I want to create a chatbot for a company?
there is a bit of overlap between the two. but you really can't go wrong with either of them. they are both very reliable and have a great community.
it seems to me that llamaindex is focusing a lot more on the data ingestion side and langchain is going more for the overall orchestration of the components. at least for now.
the good news is that you can use both :) most of the paradigms are compatible, so you can take advantage of the strengths of each one.
in the meantime, i recommend you focus on one of the two and then start implementing features from the other one as needed. you will soon get the core concepts and be able to choose which one is better suited for a specific project 👍
Well done boss, i almost thought you stole it from Krish Naik, but your adding LlamaParse made the difference
thanks mate! i'm pretty sure both of us took it from the official docs, tbh 😅
but yeah, i wanted to give a more complete presentation of everything they offer, not only the open-source part :)
Do we have to pay for the openai api key? And how?
I’m waiting for the local install video.
Hey AO, looks like the default LLM is being used, which is DaVinci. Can we upgrade to GPT-4o?
hello there, absolutely. for this particular example, you can just add the model param to the query engine:
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.llms.openai import OpenAI
# load the files from the local "data" folder
documents = SimpleDirectoryReader("data").load_data()
# embed the documents and build an in-memory vector index
index = VectorStoreIndex.from_documents(documents)
# pass the llm explicitly instead of relying on the default
query_engine = index.as_query_engine(llm=OpenAI(model="gpt-4o-mini"))
response = query_engine.query("What is the bootcamp about?")
print(response)
```
btw, i am pretty sure that the default model that llamaindex uses with openai is gpt-3.5-turbo. look: github.com/run-llama/llama_index/blob/41643a65bc89cfdb3eb0c11b4f8cb256b02aa21c/llama-index-integrations/llms/llama-index-llms-openai/llama_index/llms/openai/base.py#L78
Do you already have a video on how to use LlamaIndex with a local llama3 instead of ChatGPT? Thanks!
Not yet, but coming up next week!
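In the meantime, a rough sketch of pointing llama_index at a llama3 model served locally by Ollama; it assumes the llama-index-llms-ollama and llama-index-embeddings-ollama packages are installed and that the llama3 and nomic-embed-text models have already been pulled:
```python
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.embeddings.ollama import OllamaEmbedding
from llama_index.llms.ollama import Ollama

# use local models served by Ollama instead of the OpenAI defaults
Settings.llm = Ollama(model="llama3", request_timeout=120.0)
Settings.embed_model = OllamaEmbedding(model_name="nomic-embed-text")

documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)
print(index.as_query_engine().query("What is this document about?"))
```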
thanks
thank *you*!
can you do a langchain tutorial with open source models? i was searching and found none. if there is one, please give me the link
i feel like this shirt makes it look like i'm at the beach
it looks cool
😎
looking great bro!
hoping for video #2
finally here, sorry lots of work!!
brother, please make a video on RAG (using llama index). i have done it already; if you need it, i can send it to you so you can save your research time. please explain it in your language, and please use an open source model instead of openai
Thank you!
Thank you, Alain!!
It is vague at this point, at 12:50: "nodes are interconnected, creating a network of knowledge". This is a very old technique. Obviously, embeddings of chunks that are semantically close to each other will fall in the same area of the embedding space, so they are interconnected. How is this any different from Chroma DB or ANY other vector database in the world? What is different here?
LlamaIndex is a commercial product, with pricing based on usage... Ok, bye. Thanks for the video anyway.
so? wdym?
Awesome content as usual
hey sami 🙌
@@alejandro_ao 🎉