If you want to learn RAG Beyond Basics, check out this course: prompt-s-site.thinkific.com/courses/rag
Does it cover how to minimize (or even eliminate) hallucinations, and how to ensure the result is ALWAYS grounded in the content added to the RAG "database"?
Keep going with this approach; it is something I have been struggling with.
Me too. In my case, the answer is usually hidden behind the data, the context, and the images.
This is the best AI channel out there, PERIOD. Thanks for sharing your knowledge
A nice open-source, self-hosted version would be great.
Such insightful information. Eagerly waiting for more multimodal approaches.
Thanks, is there a video of the same project but with LangChain instead of LlamaIndex?
Great stuff.
What about doing the same, but using Llama 3 or a smaller local LLM?
My use case is to extract the relevant text along with the images available in the file using generative AI. When a prompt is given, the relevant text and images should be displayed in the response.
We need more videos on this topic
Thanks, your videos are very helpful. I have several gigs of PDF ebooks that I would like to process with RAG. Which approach do you think would be best, this one or GraphRAG? In my case I'm looking only at local models, as the costs would otherwise be very high. What if I convert all the PDF pages into images first, process them with a local model like Phi-3 Vision, and then run the output through GraphRAG, would that work out?
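To make that proposal concrete, here is a minimal sketch of the pages-to-images-to-VLM step, assuming pdf2image (which needs poppler installed) and a local VLM served through Ollama. The model name, prompt, and file names are illustrative assumptions, not anything from the video:

```python
# Minimal sketch: render PDF pages to images, then caption/transcribe each
# page with a local vision model served by Ollama.
from pdf2image import convert_from_path
import ollama

pages = convert_from_path("book.pdf", dpi=150)  # one PIL image per PDF page

page_texts = []
for i, page in enumerate(pages):
    path = f"page_{i}.png"
    page.save(path)
    # Ask the local vision model to transcribe the text and describe figures.
    response = ollama.chat(
        model="llava",  # swap in whatever local VLM you have pulled
        messages=[{
            "role": "user",
            "content": "Transcribe the text and describe any figures on this page.",
            "images": [path],
        }],
    )
    page_texts.append(response["message"]["content"])

# page_texts can now be chunked and fed into GraphRAG or a plain vector index.
```

Whether GraphRAG adds value on top then depends on how entity-heavy the books are; the rendering-plus-captioning step itself is cheap to prototype locally.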
Hi, your videos are very helpful, thank you!
Glad you like them!
Out of interest, what is the application called that you used to illustrate the flows (2:53 in the video)? Thanks.
I am using Mermaid code for this.
@@engineerprompt thanks. Great video btw 👍🏻
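For reference, a flow diagram like the ones in the video can be written in a few lines of Mermaid. This snippet is a generic illustration of a multimodal RAG flow, not the exact diagram from the video:

```mermaid
flowchart LR
    PDF[PDF document] --> Extract[Extract text + images]
    Extract --> TextEmb[Text embeddings]
    Extract --> ImgEmb[Image embeddings]
    TextEmb --> VDB[(Vector store)]
    ImgEmb --> VDB
    Query[User query] --> VDB
    VDB --> LLM[Multimodal LLM] --> Answer
```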
Lots of good info, thanks
Can you please dive deeper into why Qdrant was used, and the limitations of other vector DBs for storing both text and image embeddings? Thx
will see if I can create a video on it.
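Until then, one reason Qdrant is a natural fit: it supports named vectors, so a single point can carry both a text embedding and an image embedding. A minimal sketch with qdrant-client follows; the collection name and embedding sizes are illustrative assumptions, not values from the video:

```python
# Minimal sketch: one Qdrant collection with two named vectors per point.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(":memory:")  # use a real host/port in production

client.create_collection(
    collection_name="multimodal_docs",
    vectors_config={
        "text": VectorParams(size=1536, distance=Distance.COSINE),
        "image": VectorParams(size=512, distance=Distance.COSINE),
    },
)

client.upsert(
    collection_name="multimodal_docs",
    points=[PointStruct(
        id=1,
        vector={"text": [0.0] * 1536, "image": [0.0] * 512},  # real embeddings here
        payload={"doc_id": "report-1", "page": 3},
    )],
)

# Search against either modality by naming the vector to query.
hits = client.search(
    collection_name="multimodal_docs",
    query_vector=("text", [0.0] * 1536),
    limit=5,
)
```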
Very nice video, but if you could do it with an open-source embedding model, it would be very cool. Thank you for the video.
I appreciate your effort. Please create one on fine-tuning the model for efficient retrieval if possible, with LangChain.
Can you make it using completely open-source models?
Do you think all of this is now replaced by Gemini?
Need to do it all in open source. No API Keys.
It is essential to thoroughly preprocess the documents before they enter the RAG pipeline. This involves extracting the text, tables, and images, and processing the latter through a vision module. Additionally, it is crucial to maintain content coherence by ensuring that references to tables and images are correctly preserved in the text. Only after this processing should the documents be passed to an LLM.
agree!
That's a lot of work. Can an AI do this?
@@jtjames79 Yup :)
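As a concrete illustration of the preprocessing described above, here is a minimal sketch assuming the unstructured library (the partition_pdf parameters shown exist in its API, though names vary across versions); describe_image is a hypothetical hook for whatever vision module you use:

```python
# Minimal sketch: extract text, tables, and images from a PDF, keeping
# tables structured and replacing images with descriptions in place.
from unstructured.partition.pdf import partition_pdf

def describe_image(path: str) -> str:
    ...  # hypothetical: call your VLM here and return a text description

elements = partition_pdf(
    filename="report.pdf",
    strategy="hi_res",            # needed for layout, tables, and images
    infer_table_structure=True,   # keep tables as structured HTML
    extract_images_in_pdf=True,   # write embedded images to disk
)

chunks = []
for el in elements:
    if el.category == "Table":
        # Preserve the table structure instead of a lossy text flattening.
        chunks.append(el.metadata.text_as_html)
    elif el.category == "Image":
        # Replace the image with a description, kept in its original place
        # so references like "see Figure 2" still point at something.
        chunks.append(f"[Image: {describe_image(el.metadata.image_path)}]")
    else:
        chunks.append(el.text)

document_text = "\n\n".join(chunks)  # now ready for chunking + embedding
```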
Great job! Thanks.
thanks :)
Can we do this method using LangChain?
Yes, will be creating a video on it.
Is it better than GraphRAG? How does the output quality compare to it?
You could potentially create a GraphRAG on top of it.
What if the user query contains text + an image?
You can use a VLM to generate a description of the image and send that as part of the text query.
@@engineerprompt Yeah, as I expected. But what if I pass an image the VLM doesn't understand, for example a personal image not available online? I should first fine-tune the VLM on my images and then do what you said, right?
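For anyone wondering what the "describe the image, then query with text" step could look like, here is a minimal sketch using the OpenAI client with a vision-capable model; the model name, prompt, and file names are illustrative assumptions:

```python
# Minimal sketch: turn a user-supplied image into text, then fold it into
# the usual text retrieval query.
import base64
from openai import OpenAI

client = OpenAI()

def describe_query_image(image_path: str) -> str:
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in detail."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

user_text = "Find documents related to this chart."
query = f"{user_text}\nImage context: {describe_query_image('chart.png')}"
# `query` now goes through the normal text retrieval pipeline.
```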
Thanks
Where's the code that was used?
This approach is not good enough to add value. The pictures and text need to be referenced and linked in both vector stores to produce better similarity matches.
watch my latest video :)
Do you have any work on this?
Which video, @@engineerprompt?
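For what it's worth, a minimal sketch of that kind of linking, assuming qdrant-client with two collections whose points share doc_id/figure ids in their payloads; all names, ids, and sizes here are illustrative:

```python
# Minimal sketch: cross-link text and image points via shared payload fields,
# so a hit in either collection can pull in its counterpart.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

client = QdrantClient(":memory:")
for name, dim in [("text_chunks", 1536), ("images", 512)]:
    client.create_collection(
        collection_name=name,
        vectors_config=VectorParams(size=dim, distance=Distance.COSINE),
    )

# The text chunk records which figure it cites; the image records which
# chunk cites it, so retrieval in one store can fetch from the other.
client.upsert("text_chunks", points=[PointStruct(
    id=1, vector=[0.0] * 1536,
    payload={"doc_id": "paper-7", "refers_to_figure": "fig-2"},
)])
client.upsert("images", points=[PointStruct(
    id=1, vector=[0.0] * 512,
    payload={"doc_id": "paper-7", "figure_id": "fig-2", "cited_by_chunk": 1},
)])
```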
I expect image generation will have another kind of breed... image generation based on image understanding, grounded in facts.