One thing I like about this man: he gives some background on each line / framework / library used, to make people aware of all the nuanced interactions between projects and the researchers involved. Love that.
ColPali is an excellent technique for English documents. When you try to use it on non-English documents, the retrieval doesn't work well, because ColPali uses the PaliGemma model, which is relatively small and was trained mostly on an English dataset.
Good point, but I think you can fine-tune the vision model for other languages. Qwen is probably a good option there as well. I'll see if there are any resources available and share them.
Qwen2-VL is good for Indic languages, at least from what I have tested.
Excellent. Exactly what I was looking for. A "fine-tuning" episode for such a VBRAG pipeline would be a great follow-up.
good idea, will look into it.
Can you make an end-to-end project where, instead of an index, we push the embeddings into a vector store like ChromaDB or Pinecone? That would be amazing.
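A rough sketch of that idea. One caveat: ColPali emits one embedding per image patch and scores with MaxSim late interaction, but single-vector stores like ChromaDB or Pinecone expect one vector per item, so this sketch mean-pools the patch vectors (a lossy assumption). The in-memory dict below stands in for the store; all embeddings are random toy data.

```python
import numpy as np

def pool(patch_vectors: np.ndarray) -> np.ndarray:
    """Collapse per-patch vectors to one unit vector (lossy: MaxSim is lost)."""
    v = patch_vectors.mean(axis=0)
    return v / np.linalg.norm(v)

# Stand-in for a vector store; with ChromaDB this would roughly be
# collection.add(ids=[page_id], embeddings=[pool(...).tolist()]) and
# collection.query(query_embeddings=[...], n_results=top_k).
store: dict = {}

def add_page(page_id: str, patch_vectors: np.ndarray) -> None:
    store[page_id] = pool(patch_vectors)

def query(query_patch_vectors: np.ndarray, top_k: int = 3) -> list:
    q = pool(query_patch_vectors)
    scored = sorted(store.items(), key=lambda kv: -float(kv[1] @ q))
    return [page_id for page_id, _ in scored[:top_k]]

# Toy usage: 5 "pages", 32 patches each, 128-dim random embeddings.
rng = np.random.default_rng(0)
for i in range(5):
    add_page(f"page_{i}", rng.normal(size=(32, 128)))
print(query(rng.normal(size=(32, 128)), top_k=2))
```

Whether pooled single-vector retrieval is accurate enough to replace ColPali's late-interaction scoring is an open question; some pipelines use the pooled vector for a fast first pass and re-rank the top hits with full MaxSim.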
Thank you for sharing this!
Great work, thanks! I wonder how it compares to vanilla RAG for text PDFs in terms of accuracy. Vanilla RAG suffers when the answer to the user's question needs to be synthesized from different parts of the text. GraphRAG is good for those cases, but it is slow and expensive. Can this handle complex questions like those?
EXCELLENT VIDEO - THANK YOU
I tried this technique with gemini-1.5-flash-exp-0827 instead, and it works fine.
Cool find in Claudette
How do you "chunk" or parse sections out of longer documents? Or what if we want to create a knowledge graph? The ultimate analysis is done by an LLM, so we still have context-length issues, especially for local implementations. Can you extract the text itself for further processing?
Why do we need a large-VRAM GPU? Is it for ColPali or for the VLM?
Same question from me.
Maybe if you pass an image URL instead of the image bytes, you will consume fewer input tokens, and so lower the cost?
Will this work properly on PDFs comprising detailed tabular information? And on hand-drawn images?
Do you think one could use this to convert a PDF into a text file, which could then be used to generate a knowledge graph with Microsoft's GraphRAG?
What is the best way to contact you for consulting with our dev company?
I wonder if a VBRAG could perform math calculations on data extracted from an image of a table? 🤔 I suppose if the extracted results are accurate, they could then be passed to another agent capable of doing the calculations?
Math might be a little hard, but I think it's worth trying.
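One hedged pattern for the idea above: let the vision model only transcribe the table into structured numbers, then do the arithmetic deterministically in code instead of trusting the model's math. The JSON below is hand-written to stand in for a VLM's extraction output; in a real pipeline it would come from the model's response.

```python
import json

# Stand-in for a VLM's transcription of an image table (hypothetical data).
vlm_output = """
{"rows": [
  {"item": "Q1 revenue", "value": 120.5},
  {"item": "Q2 revenue", "value": 98.0},
  {"item": "Q3 revenue", "value": 143.25}
]}
"""

def total(extracted_json: str) -> float:
    """Deterministic 'calculator agent': sums the values the VLM extracted."""
    rows = json.loads(extracted_json)["rows"]
    return sum(row["value"] for row in rows)

print(total(vlm_output))  # 361.75
```

The accuracy bottleneck then shifts from the model's arithmetic to its transcription of the table, which is usually the easier problem to verify.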
Thanks
Very good content, thank you.
Glad you liked it!
Many thanks for this great video. I have a set of scanned pages saved as a PDF; will this work? Thanks.
Yes, I think this approach will work on scanned pages as well.
thanks
Will this not run on a local Nvidia RTX 4x GPU with 16 GB of RAM?
I think that should be able to run the pipeline.
But this works only for PDFs; what about docx, pptx, and epub files? I want to work multimodally with those files too.
It works with whatever can be converted to an image, so basically everything.
Man, I love your videos and I'm a big fan. I just have one request, and I wonder if everyone else agrees. I get very annoyed and distracted each time your recording software zooms in and out of the page or screen you are showing. The problem is that it's inconsistent and keeps jumping around. Please, please consider making it static, unless you are zooming in for a longer period, like when writing code in VS Code. Thank you, and keep up the amazing work.
Thanks for pointing it out; that's good feedback. I'll see what I can do.
Does it work offline?
if you watch the video, you will know the answer :)
None of these solutions are open source... even in your other videos. I think the video of yours that uses Marker is the only one.