Wow, got it working, thank you so much. It took the better part of a day on my non-GPU laptop.
Next step is to repeat this with some cloud-based GPU horsepower.
Thank you, your videos are helping me a lot. Please keep uploading videos like this.
I'd like to see a RAG system specifically built for working with large code bases. Most RAG examples are optimised for document retrieval and citation, but I think there's a lot of room for advanced code modernisation / rewriting augmented with RAG, simply to enable working with large code bases (e.g. >100k tokens).
Is there an institution that ranks RAG systems? For example, I'd like to find out whether this or the multi-modal RAG from your recent video works better. Would you know?
There is the MTEB (Massive Text Embedding Benchmark) leaderboard for embedding models on Hugging Face, but "which would be better" depends on your application.
Yes, we are interested, please add multi-modal and PDF processing. Also use a cheap model with prompt caching for the chunking etc., and a smart model with a large context window for retrieval, to get accurate, vetted results. E.g. GPT-4o-mini for ingesting and Claude 3.5 Sonnet for retrieval, or something like that.
Can you elaborate on your process: which model did you use for each part, and how?
How can I give a chunked CSV or JSON file as input?
I was able to use both the latest Phi and Llama models with Ollama, and it works very smoothly with LightRAG. For a large set of files, I was able to create a knowledge-graph-based conditional filter over LightRAG's GraphML files, which increased efficiency drastically; otherwise hybrid queries take much longer.
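The exact filter isn't shown in the comment, but a minimal sketch of a conditional filter over LightRAG's GraphML output could look like this, assuming the graph is written to `graph_chunk_entity_relation.graphml` in the working directory and nodes carry an `entity_type` attribute (both are assumptions; check your own dump):

```python
# Sketch: keep only nodes of selected entity types from a LightRAG GraphML dump.
# The file name and the "entity_type" attribute are assumptions; adjust to your data.
import networkx as nx

def filter_graph(graphml_path: str, keep_types: set[str]) -> nx.Graph:
    g = nx.read_graphml(graphml_path)
    keep = [n for n, data in g.nodes(data=True) if data.get("entity_type") in keep_types]
    return g.subgraph(keep).copy()

if __name__ == "__main__":
    sub = filter_graph("./rag_working_dir/graph_chunk_entity_relation.graphml", {"ORGANIZATION", "PERSON"})
    nx.write_graphml(sub, "./rag_working_dir/filtered.graphml")
    print(f"Kept {sub.number_of_nodes()} nodes and {sub.number_of_edges()} edges")
```

Restricting hybrid queries to the smaller filtered graph is presumably what cut the query time.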
How fast is it compared to normal RAG?
Your model selection itself is not great.
@@LifeCasts As long as it works well for my use case with excellent results, I don't mind.
@@huuhuynguyen3025 Normal vector RAG is much faster. LightRAG is not that fast, since it still creates both the graph and the embeddings. However, it is approx. 4 to 5x faster than GraphRAG for the documents I tested.
@@LifeCasts If you are referring to the Qwen model, I also tried using it first, but on my hardware it was very slow, hence I had to switch.
Thanks for that. I am confused by the query types: what are naive vs. local vs. global vs. hybrid?
Thanks so much for your tutorials and demos. What if the data is product-related and I've already processed a txt with 200 products, and then the next day the price is updated for 5 of them? Do I need to process the whole list again? Will the old price be remembered, or will it be replaced in the RAG?
How can we make it return the "reference documents" that it uses for answering?
Definitely more LightRAG!
Can you summarize what they are using for the edges in their graph?
Also, since the graph relations are stored in some generic text format (JSON?), can you generate the graph with one LLM and run it with another LLM? Advantages? Disadvantages?
What about an existing knowledge graph, in Neo4j for example? Can you enrich an existing graph?
Is it possible to combine multiple articles and build one big knowledge graph?
Yes, that's possible. With the new updates it supports a lot more than plain text files now.
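For the multi-article case, a minimal sketch (using the library's default LLM and embedding settings, which need an OPENAI_API_KEY, as in the repo's demo scripts; the file names are hypothetical):

```python
# Sketch: building one knowledge graph from several articles with LightRAG.
# Uses the library defaults (OpenAI models), so OPENAI_API_KEY must be set.
from lightrag import LightRAG

rag = LightRAG(working_dir="./rag_working_dir")

articles = []
for path in ["./article_1.txt", "./article_2.txt", "./article_3.txt"]:  # hypothetical files
    with open(path, encoding="utf-8") as f:
        articles.append(f.read())

# insert() takes a single string or a list of strings; repeated calls keep
# adding to the same knowledge graph in working_dir.
rag.insert(articles)
```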
What about passing options={"num_ctx": 32000} to the function? Is that not supported?
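For reference, the repo's lightrag_ollama_demo.py passes the Ollama context size roughly like this (parameter names follow that demo and may differ between LightRAG versions, so treat this as a sketch):

```python
# Sketch: raising the Ollama context window via LightRAG's llm_model_kwargs,
# modeled on lightrag_ollama_demo.py; names may vary by version.
from lightrag import LightRAG
from lightrag.llm import ollama_model_complete, ollama_embedding
from lightrag.utils import EmbeddingFunc

rag = LightRAG(
    working_dir="./rag_working_dir",
    llm_model_func=ollama_model_complete,
    llm_model_name="qwen2",
    llm_model_max_token_size=32768,
    # "options" is forwarded to Ollama, so num_ctx sets the context window
    llm_model_kwargs={"host": "http://localhost:11434", "options": {"num_ctx": 32768}},
    embedding_func=EmbeddingFunc(
        embedding_dim=768,
        max_token_size=8192,
        func=lambda texts: ollama_embedding(
            texts, embed_model="nomic-embed-text", host="http://localhost:11434"
        ),
    ),
)
```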
Can you please make a video on a roadmap for learning LLMs or generative AI?
I can't get "hybrid" mode to work with Ollama; it thinks for like 10-15 minutes and prints something unreadable as the result... I'm trying the example from the repo without any modifications.
Sorry, what version of Qwen2 are you using? 7B? Also, what minimum specs would you recommend for running it? Thank you.
4 GB VRAM is the minimum for running a 7B model; better to use llama3.2 3B.
@@ShyamnathSankar-b5v I just tried llama3.2:3b and it does not seem to be working:
```
Processed 42 chunks, 0 entities(duplicated), 0 relations(duplicated)
WARNING:lightrag:Didn't extract any entities, maybe your LLM is not working
WARNING:lightrag:No new entities and relationships found
INFO:lightrag:Writing graph with 0 nodes, 0 edges
INFO:lightrag:Creating a new event loop in a sub-thread.
```
@@rubencontesti221 Well, I have tried this by uploading a small amount of text, and it works fine.
@@rubencontesti221 I have the same problem. Have you solved it?
@@crane-d5d 3B is very small. There have been some conversations suggesting that large, instruct-type models are needed to build the knowledge graph. One of the devs of NanoGraph, an alternative to LightRAG, suggested users switch to a larger Qwen2. I think it's no coincidence that PromptEngineering is also using it. I'm running a 14B Qwen2.5 on LightRAG and it is processing the knowledge graph.
Everything seemed to take extremely long, and I'm not sure why on an RTX 4080 with 12 GB VRAM.
When I ran `ollama ps` during the launch, I saw Qwen running but not the embedding model. Perhaps this is the problem. Does anyone have an idea why the embedding model wouldn't be running? Any tips?
Thanks in advance.
Could you do this with spreadsheets, or even just a CSV file, and then query the data with graphs and all? Also, thank you for not hiding anything behind a .bat file on Patreon.
How does this perform with ColBERT or ColPali?
Can we use LightRAG for documents that contain images, tables, and charts?
Any tables/charts will need a multi-modal preprocessor to convert them into a format LLMs understand, like JSON or Markdown. Docling just came out from IBM and preprocesses several document types while preserving their structure. Unstructured is another option.
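For example, a minimal Docling sketch (API names are taken from its early releases and the input file is hypothetical; check the current docs):

```python
# Sketch: converting a PDF that contains tables into Markdown before ingestion.
from docling.document_converter import DocumentConverter

converter = DocumentConverter()
result = converter.convert("annual_report.pdf")  # hypothetical input file
markdown_text = result.document.export_to_markdown()

with open("annual_report.md", "w", encoding="utf-8") as f:
    f.write(markdown_text)
```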
When I run "python examples/lightrag_ollama_demo.py", it throws back a "500 Internal Privoxy Error". I do use Shadowsocks; can anyone help me tackle this problem?
Change Ollama to host on localhost instead of 0.0.0.0, hope that helps.
Is there any way we can use Gemini models?
Not sure if they have an OpenAI-compatible API, but if not, I think they can be used via a LiteLLM proxy. Will explore.
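A minimal sketch of the LiteLLM route, using its Python API rather than the proxy (the model id is illustrative and GEMINI_API_KEY is assumed to be set):

```python
# Sketch: calling a Gemini model through LiteLLM's OpenAI-style interface.
from litellm import completion

response = completion(
    model="gemini/gemini-1.5-flash",  # illustrative model id
    messages=[{"role": "user", "content": "Summarize LightRAG in one sentence."}],
)
print(response.choices[0].message.content)
```

Running LiteLLM as a proxy instead would expose the same models behind an OpenAI-compatible endpoint.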
Ask Gemini to write the code for the Gemini connection :)
Wondering what the use case is for this in real life? For what purpose?
A graph by itself, without any AI, is just a representation of data that not everybody needs.
Brother, this is too hectic. It takes a lot of time downloading and running these models. Oh god.