One thing I like about this man is that he gives some background on each line / framework / library used, to make people aware of all the nuanced interactions between the projects and the researchers involved. Love that.
Colpali is an excellent technique for English documents.
When you try to use non-English documents, the retrieval doesn't work well because ColPali uses the PaliGemma model, which is a relatively small model trained mostly on an English dataset.
Good point, but I think you can fine-tune the vision model for other languages. Qwen is probably a good option there as well. Will see if there are any resources available and share them.
Qwen2-VL is good for Indic languages, at least from what I have tested.
Excellent. Exactly what I was looking for. A "fine-tuning" episode on such a VBRAG pipeline would be a great follow-up.
good idea, will look into it.
Can you make an end-to-end project where, instead of a local index, we push the embeddings to a vector store like ChromaDB or Pinecone? That would be amazing.
Great work! Thanks! I wonder how it compares to vanilla RAG for text PDFs in terms of accuracy? Vanilla RAG suffers when the answer to the user's question needs to be synthesized from different parts of the text. GraphRAG is good for those cases, but it is slow and expensive. Can this handle complex questions like those?
I'm a big fan of this approach, but I do have a question: we are feeding an image and the query to the LLM at the end anyway, so why not pass the PDF itself to the LLM, like the Claude model we're using at the end?
It's actually an interesting question how Claude parses the PDF you upload to it. Is it also treating it like an image, or just turning it into text?
I think you're probably right that you'd get better results from parsing the PDF, though.
In fact, Qdrant also supports multi-vector embeddings.
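For anyone curious, here is a minimal sketch of what that could look like with the qdrant-client Python package. The collection name, URL, and vector size are assumptions rather than anything from the video; the idea is that ColPali-style models emit one small vector per image patch, and Qdrant's MaxSim comparator scores them late-interaction style:

```python
# Hedged sketch: a Qdrant collection configured for multi-vector (late-interaction)
# embeddings such as ColPali's per-patch vectors. Collection name, URL, and the
# 128-dim size are assumptions.
from qdrant_client import QdrantClient, models

client = QdrantClient(url="http://localhost:6333")

client.create_collection(
    collection_name="colpali_pages",             # hypothetical name
    vectors_config=models.VectorParams(
        size=128,                                # per-patch embedding dimension
        distance=models.Distance.COSINE,
        multivector_config=models.MultiVectorConfig(
            comparator=models.MultiVectorComparator.MAX_SIM  # MaxSim scoring
        ),
    ),
)

# Each point then stores a list of 128-d vectors (one per image patch) for a page,
# and queries are scored with MaxSim against all of them.
```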
Thank you for sharing this!
Why do we need a large-VRAM GPU? Is it for ColPali or for the VLM?
Same question from me.
I tried this technique but with gemini-1.5-flash-exp-0827, and it works fine.
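For reference, a rough sketch of what swapping in Gemini for the generation step could look like with the google-generativeai SDK; the file path, page index, and prompt below are placeholders:

```python
# Minimal sketch: send a retrieved page image plus the question to Gemini instead
# of Claude. Assumes the google-generativeai SDK and pdf2image; the path, page
# index, and prompt are placeholders.
import google.generativeai as genai
from pdf2image import convert_from_path

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

page_image = convert_from_path("report.pdf")[4]   # the page returned by retrieval

response = model.generate_content(
    [page_image, "Summarize the table on this page and list the key figures."]
)
print(response.text)
```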
Hi, great video! When I ran the `RAG.index()` function from byaldi on my T4 instance, it took about 20 seconds per PDF page. Is this expected? Also, does byaldi support GPU for embedding, and is it automatically utilized?
Thanks!
Cool find in Claudette
EXCELLENT VIDEO - THANK YOU
Very good content, thank you.
Glad you liked it!
Will this work properly on PDFs containing detailed tabular information? And on hand-drawn images?
What is the advantage of using these VLM methods instead of just converting the PDF to markdown?
What if the information needed to answer a question is on two consecutive pages, and only the first one is retrieved because the second contains only the continuation of the first?
This is a real problem.
You can retrieve multiple images/pages, or append the neighboring pages to your context. To make all this simple, I have put together an open-source project, video coming soon: github.com/PromtEngineer/localGPT-Vision
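A minimal sketch of that "retrieve more pages and append the neighbors" idea, assuming byaldi's RAGMultiModalModel API and pdf2image for the page images; the paths, query, and k are placeholders:

```python
# Hedged sketch: retrieve the top-k pages with byaldi, then also append each
# hit's following page so answers that continue across a page break stay intact.
# Assumes byaldi's RAGMultiModalModel API; paths and the query are placeholders.
from byaldi import RAGMultiModalModel
from pdf2image import convert_from_path

RAG = RAGMultiModalModel.from_pretrained("vidore/colpali")
RAG.index(input_path="docs/report.pdf", index_name="report", overwrite=True)

pages = convert_from_path("docs/report.pdf")      # one PIL image per page

results = RAG.search("What drove the Q3 revenue change?", k=3)

context_images = []
for r in results:
    idx = r.page_num - 1                          # byaldi page numbers are 1-based
    context_images.append(pages[idx])
    if idx + 1 < len(pages):                      # neighbor: the next page
        context_images.append(pages[idx + 1])

# context_images (plus the question) can now be passed to the VLM
```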
How do you "chunk" or parse sections out of longer documents? Or what if we want to create a knowledge graph? The final analysis is still done by an LLM, so we still have context-length issues, especially for local implementations. Can you extract the text itself for further processing?
unstructured, LlamaParse, or Upstage's document parser.
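As a small example, here is a sketch of text extraction and section-aware chunking with the unstructured library, one of the tools mentioned above (the file name and chunk size are placeholders; LlamaParse and Upstage have their own SDKs):

```python
# Hedged sketch: extract text from a PDF and chunk it by section titles using
# the unstructured library. File name and max_characters are placeholders.
from unstructured.partition.pdf import partition_pdf
from unstructured.chunking.title import chunk_by_title

elements = partition_pdf("report.pdf")                   # parse into typed elements
chunks = chunk_by_title(elements, max_characters=2000)   # section-aware chunks

for chunk in chunks[:3]:
    print(chunk.text[:200], "\n---")
```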
Would this approach work well for summarization with Qwen2-VL 7B locally, for technical papers with diagrams? Thank you.
Yes, check out the localGPT-Vision project, which implements an end-to-end vision-based RAG. ua-cam.com/video/YPs4eGDpIY4/v-deo.html
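For anyone who wants to try it directly, a rough sketch of summarizing a retrieved page image with Qwen2-VL 7B via Transformers; the image path and prompt are placeholders, and qwen-vl-utils is a separate pip package:

```python
# Hedged sketch: summarize a single retrieved page image with Qwen2-VL-7B locally.
# Follows the standard Transformers usage for this model; the image path and
# prompt are placeholders.
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "file:///path/to/retrieved_page.png"},
        {"type": "text", "text": "Summarize this page, including the diagram."},
    ],
}]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(text=[text], images=image_inputs, videos=video_inputs,
                   padding=True, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
summary = processor.batch_decode(
    output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0]
print(summary)
```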
Maybe if you pass an image URL instead of the image bytes, you would consume fewer input tokens and so reduce the cost?
What is the best way to contact you for consulting with our dev company?
Do you think one could use this to convert a PDF into a text file, which could then be used to generate a knowledge graph with Microsoft's GraphRAG?
Many thanks for this great video. I have a set of scanned pages saved as a PDF. Will this work? Thanks.
Yes, I think this approach will work on scanned pages as well.
Thanks
I wonder if a VBRAG could perform math calculations on values extracted from an image of a table? 🤔 I suppose if the extracted values are accurate, they could then be passed to another agent capable of performing calculations on them?
Math might be a little hard, but I think it's worth trying.
This does not run on a local Nvidia RTX 4x GPU with 16 GB of VRAM?
I think that will be able to run the pipeline.
thanks
But this works only for PDFs; what about docx, pptx, and epub files? I want to work multimodally with those files too.
It works with anything that can be converted to an image, so pretty much everything.
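A minimal sketch of one way to do that conversion: a headless LibreOffice call for docx/pptx, then pdf2image for page images. The paths are placeholders, and epub may need a separate converter such as Calibre first:

```python
# Hedged sketch: render docx/pptx (and other LibreOffice-readable formats) to
# page images so they can be indexed just like PDF pages. Paths are placeholders;
# epub files may need a separate converter such as Calibre first.
import subprocess
from pathlib import Path

from pdf2image import convert_from_path

def to_page_images(path: str, out_dir: str = "converted"):
    src = Path(path)
    Path(out_dir).mkdir(exist_ok=True)
    if src.suffix.lower() != ".pdf":
        # Convert the document to PDF with a headless LibreOffice call
        subprocess.run(
            ["libreoffice", "--headless", "--convert-to", "pdf",
             "--outdir", out_dir, str(src)],
            check=True,
        )
        src = Path(out_dir) / (src.stem + ".pdf")
    return convert_from_path(str(src))            # one PIL image per page/slide

pages = to_page_images("slides.pptx")
```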
None of these solutions are open source, even in your other videos. I think the video you have that uses Marker is the only one.
Does this work offline?
if you watch the video, you will know the answer :)