What scenarios do you see GraphRAG being useful for?
UPDATE: GraphRAG is now open source! Check out the release announcement video here: ua-cam.com/video/dsesHoTXyk0/v-deo.html
Using GraphRAG to make GraphRAGs.
Because AI should be able to go down the rabbit hole.
Profiling people
Anywhere relationships are important: abstract associations between datasets, such as laws and policies, and things that are very narrative-driven, such as stories. Atypical datasets, basically.
@Sergio-rq2mm I choose to go the 1984 route
Bible study
This is basically causal grounding: semantic symbolic reasoning, figured out from an architectural perspective. Add a powerful model and something very compelling and AGI-like would be the result, I would assume (plus MCTS sampling, lol). Causal grounding is a huge hole in current models.
This is dope research. Kudos.
What is causal grounding?
Looking forward to the code for this!
I've been doing work in the area of creating knowledge graphs for codebases. The nice thing about generating them for code (as opposed to text) is that you don't have to rely on LLM calls to recognize and generate relationships, but you can utilize language servers and language parsers for that.
That's interesting; what kind of insight can you get from the derived structure? I don't think code agents are leveraging language servers enough; it looks like they only do RAG vector search for context.
I'd love to hear more about this. Any code you can share?
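For the curious, here is a minimal sketch of what parser-driven edge extraction could look like, using Python's built-in ast module in place of a language server, no LLM involved; the function and module names are just illustrative:

```python
# Sketch: derive (caller, callee) edges from source code with a parser
# rather than LLM calls. The stdlib ast module stands in for a language
# server here; real setups could use LSP call-hierarchy queries instead.
import ast

def extract_call_edges(source: str, module: str):
    """Return (caller, callee) edges for call sites inside each function."""
    edges = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            caller = f"{module}.{node.name}"
            for child in ast.walk(node):
                # only direct name calls; attribute calls skipped for brevity
                if isinstance(child, ast.Call) and isinstance(child.func, ast.Name):
                    edges.append((caller, child.func.id))
    return edges

sample = """
def load(path):
    return open(path).read()

def main():
    print(load("notes.txt"))
"""
print(extract_call_edges(sample, "demo"))
# edges such as ('demo.load', 'open'), ('demo.main', 'load'), ('demo.main', 'print')
```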
While RAG is a good process for reducing hallucinations, GraphRAG makes the retrieved context richer with its relationship-building techniques. The expense is worth it. Is the result set then re-graphed, or will the same query twice be as expensive?
This was so well explained, nicely done. My first thoughts:
1. I'd be curious to see benchmarks with cheaper LLMs. From my experience, even much smaller models like Llama-3-8B can come close to GPT-4 in this use case (entity extraction and relationships). A little fine-tuning could likely match or surpass GPT-4 for much cheaper (see the sketch after this list).
2. I wonder how this could be augmented with data sources which already have some concept of relationships, e.g. Wikipedia, dictionaries, hypertext.
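Here is a rough sketch of what that extraction call could look like against a locally hosted smaller model via an OpenAI-compatible client; the endpoint, model name, and output schema below are all assumptions:

```python
# Sketch only: entity/relationship extraction with a cheaper local model.
# The base_url, model name, and JSON schema are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

def extract(text: str) -> str:
    prompt = (
        "Extract the entities and relationships from the text below. "
        'Return JSON like {"entities": [...], '
        '"relationships": [["source", "relation", "target"], ...]}.\n\n'
        "Text:\n" + text
    )
    resp = client.chat.completions.create(
        model="llama-3-8b-instruct",  # hypothetical local model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,  # keep extraction output as deterministic as possible
    )
    return resp.choices[0].message.content

print(extract("Kevin Scott hosts the Behind the Tech podcast at Microsoft."))
```

Fine-tuning a small model on GPT-4 outputs for this exact prompt is the usual way to close the remaining quality gap cheaply.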
I was having thoughts 🙂
GPT-4 not understanding these deep relationships is by far the biggest bottleneck in my use of it. This is super exciting!
Glad I didn't skip this and watched the video; thanks for sharing the knowledge. Seems very impressive.
This seems very powerful. Thanks for sharing it and explaining it well.
That final streamlit app was awesome!!
Is there an Open source implementation of this or how could I build it into my own app?
Seems like the video was incomplete. Is there another part?
Is there no standard comparison approach? For example, one could take academic literature reviews, collect their references, throw in some more, ask the LLM system, and compare the result with the original review. There might be summaries available in the accounting and legal worlds that could be used as well.
Comparison is tough! It's another area of research we're heavily invested in. But I like the ideas that you're bringing up!
True, validation would be required to compare the results.
May I know the underlying technology used for hosting the graph database? Was it Cosmos DB?
Likely Neo4j.
It's graph-database agnostic! You can use your choice of graph DB; the technique is general enough to support multiple.
It's not about the database, it's about the methodology. RDF or property graphs should both work.
I really enjoyed this video! What tool did you use to visualise the podcast graph?
Does the repeated term "regular RAG" refer to setups using vector databases?
How is this any different from Self-Organizing Maps for RAG?
Hi, I am working on solving the same problem; vector-search RAG is not good. Can you please share the code? A tutorial would be even better!
I really like the addition of hierarchical agglomerative summarization, which gives holistic answers similar to the RAPTOR RAG strategy but with the better data representation of knowledge graphs. I'll need to read the paper to understand whether embeddings are used at all in this, and whether relationships are labelled or just have a strength value.
Relationships are not labelled but they have descriptions.
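To make the community-then-summarize idea concrete, here is a rough sketch; the paper partitions with Leiden, but networkx's built-in Louvain is used below as a stand-in, and the summarizer is a placeholder for an LLM call:

```python
# Sketch of hierarchical community summarization. GraphRAG uses Leiden
# partitioning; networkx's Louvain is a stand-in here, and summarize()
# is a placeholder where the real pipeline would call an LLM.
import networkx as nx

G = nx.Graph()
G.add_edge("Kevin Scott", "Behind the Tech", description="hosts the podcast")
G.add_edge("Guest A", "Behind the Tech", description="appears as a guest")
G.add_edge("Guest A", "machine learning", description="works in the field")

def summarize(texts):
    return " / ".join(texts)  # placeholder for an LLM summarization call

for i, nodes in enumerate(nx.community.louvain_communities(G, seed=42)):
    sub = G.subgraph(nodes)
    descriptions = [d["description"] for _, _, d in sub.edges(data=True)]
    print(f"community {i}: {sorted(nodes)} -> {summarize(descriptions)}")
```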
This could be a game-changer in both public- and private-sector intelligence analysis (as I am sure you figured out). Looking forward to additional info, but what about the private dataset's format? Is it vectorized? If so, can we assume that there are optimal and sub-optimal approaches? (In other words, is it fair to assume vectorization can significantly impact GraphRAG's performance?)
Are there any accelerators to convert a typical knowledge corpus of unstructured text into a knowledge graph conducive to GraphRAG? I understand we need to extract entities and figure out relationships, but who does that work? An LLM?
Hi, what about visualizing abstract ideas in multidimensional space, e.g. mathematical ideas and their relations to other ideas, in the form of a knowledge graph and/or tools for researching deeper details? Are there any such tools based on AI?
Thanks for the video. I can see a Use Case in my energy industry. Does GraphRAG work across all "modes" and "modalities"?
Excuse me if I'm wrong… I listened to this while exercising… but the main issue explored here was that questions like "what are the top themes?" cannot be answered by the LLM with vanilla RAG. Is this correct?
If so, then if context size grows large enough, this will be less necessary, right?
Furthermore, by introducing a graph that has communities premised on topics/themes or whatever you decide, doesn't that reduce the degrees of freedom of your system?
Thanks for sharing 👍
Is there a way to retrieve a specific area of the graph rather than providing the total graph to the LLM?
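One common pattern is to pull only a bounded neighborhood around the entities matched by the query. A minimal networkx sketch (the graph and radius are just illustrative; the same idea works as a bounded traversal in any graph DB):

```python
# Sketch: retrieve a local neighborhood (ego graph) instead of the whole graph.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")])

# everything within 2 hops of the entity matched by the query, here "B"
local = nx.ego_graph(G, "B", radius=2)
print(sorted(local.nodes()))  # ['A', 'B', 'C', 'D'] -- 'E' is 3 hops away
```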
Fabulous work! Wondering how long it takes to build a whole vector DB, and how many tokens it will take?
Any update on that Streamlit code? That would be helpful, thanks.
Is the rest of this conversation available somewhere, @alexchaomander?
Would love the opportunity to contribute to this project, super interesting.
How easy is it to update existing knowledge graphs periodically when new data comes in? Is there a “reindexing” cost?
Run this on the Lex Fridman podcast library!
What is the technology stack for that?
pls provide the code
Code will be shared soon!
+1 🙏
@alexchaomander Great! I have signed up for your newsletter. Will you announce the code release there?
le dot
+1
I am waiting eagerly for the code of this paper.
Great work! I was thinking of using a system like this to build the memory of an AI companion as it talks to the user. In this case the knowledge graph would start empty and get built dynamically with every conversation. Do you see this as a good use case for GraphRAG?
Yes!
Very nice presentation and explanation
This approach is really good, but don't you think that extracting entities and then building relationships between the extracted entities is an expensive operation if we use GPT-4 or Gemini?
I thought the same. Using a knowledge graph is super, but how are we going to create it with less compute and lower cost?
Pretty soon, everyone will be GraphRAGging their podcasts. JRE will be neat.
Can you provide the code for this? Would be amazing!
Great, this is something I also thought about when AI had difficulty finding relevant information a while back.
Basically, have filters that determine how the AI maneuvers through the training data depending on what is prompted and what is relevant.
This is something I thought about after reading a paper on the discovery of a new hybrid brain-cell type that acted as a trigger that could turn pathways on and off.
So the context in the prompt is what's important, because that decides which tags in the training data should be turned on and off.
Which in the end gives you a unique pathway for the AI to retrieve data.
Also, the next step would be to create overarching filters between several AI agents.
After you have all this, the next step is for the AI to incorporate statistics into its reasoning.
Is the code available?
Please let me play with this! Impressive work !
When will it be open sourced? :)
This is powerful!
This is amazing!
What's the database used?
This is just brilliant
Streamlit code would be great, thanks
Would be a great tool for rapid and more reliable meta-analysis.
Does ChatGPT (the paid version) use GraphRAG?
To understand semantic search, first you need to understand how HNSW works; then you realize it's no wonder it doesn't work. I ended up building a data structure to combine vector search and entities.
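A minimal sketch of that hybrid idea, with embeddings and entity tagging stubbed out (in practice they'd come from your embedding model and NER pipeline):

```python
# Sketch: vector search narrowed by an inverted entity index.
import numpy as np

chunks = ["Kevin discusses GPUs", "A recipe for soup", "GPUs and datacenters"]
vecs = np.random.rand(len(chunks), 8)        # stand-in embeddings
entity_index = {"GPU": {0, 2}, "soup": {1}}  # entity -> chunk ids

def hybrid_search(query_vec, query_entities, k=2):
    # restrict candidates to chunks sharing at least one entity with the query
    candidates = set().union(*(entity_index.get(e, set()) for e in query_entities))
    if not candidates:
        candidates = set(range(len(chunks)))  # fall back to pure vector search
    ranked = sorted(candidates,
                    key=lambda i: float(np.dot(vecs[i], query_vec)),
                    reverse=True)             # dot product ~ cosine if normalized
    return [chunks[i] for i in ranked[:k]]

print(hybrid_search(np.random.rand(8), ["GPU"]))
```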
It's a month later; where's the code you promised? Please?
This is outstanding stuff!
Is there source code anywhere for this?
I don't understand. Why do we need GraphRAG when an LLM can summarise the text and find relationships?
Knowledge graphs have a ton of formal methods to work with them. If you can get the graph into RDF, you can use all the RDF tooling, analyse it in Cytoscape, or whatever.
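For example, a tiny rdflib sketch of exporting triples to RDF and querying them back with SPARQL (the namespace and triples are made up for illustration):

```python
# Sketch: export extracted triples to RDF and query them with SPARQL.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.KevinScott, EX.hosts, EX.BehindTheTech))
g.add((EX.BehindTheTech, EX.topic, Literal("technology")))

results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?show WHERE { ex:KevinScott ex:hosts ?show . }
""")
for row in results:
    print(row.show)  # http://example.org/BehindTheTech
```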
GraphRAG. Perfect!
Hi, are you going to share the code?
Oh hey, that's the Obsidian style of note-making.
It is interesting that AI can actually remember better with the help of a zettelkasten, like humans do!?
Can't wait until the Japanese researchers conclude their research on using chemical reactions in a tube to emulate emotions, so machines can feel emotions through chemical reactions, like humans do... To me, emotions are also the best way to learn and remember things.
So what if... instead of a tube of chemical reactions...
important information and often-asked questions had an emotional-cue graph to create some kind of importance profiling, so that profile would serve as a mark whenever the AI is the expert in that field (strong retrieval in a specific field, leading to a future of MoE)?
Great research topic, but as a hands-on NLP engineer working on NER-boosted knowledge graphs and LLMs, my experience says it's too naive to believe that this would work in production systems.
I assume it's open source, because why would someone pay to have GPT-4 parse and organize their data? It takes two seconds to roll your own.
But knowledge graphs are very slow to query. I wonder if we can encode those graphs in the GPT model by building graph transformers.
I don't think that's the case. Optimized graph query engines can return results in milliseconds (e.g. WikiMedia, Google) at a fraction of the computational cost of an LLM.
The reason GraphRAG is slow-ish is that the LLMs are slow.
Google, Facebook, and LinkedIn all use graph databases; they're actually much faster than relational DBs.
Slower than LLMs?
It's beautiful!
Okay... we know GraphRAG is good, duh. How is it implemented? How do you feed it to the LLM? How do you store the data?
But don't you lose information in the process of making a knowledge graph, given that only a subset of the textual information is extracted and retained in the KG?
I don't think the LLM really needs the graph to make any decisions. It's more valuable for human users to find related information.
You can use ETL to build your knowledge graph yourself from RDBMSs; then you will not lose information.
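A minimal sketch of that ETL idea, where rows become nodes and foreign keys become edges (the database, table, and column names are made up):

```python
# Sketch: build a knowledge graph from a relational DB via plain ETL.
import sqlite3
import networkx as nx

conn = sqlite3.connect("example.db")  # hypothetical database file
G = nx.DiGraph()

# rows become nodes; the foreign key becomes a typed edge
for emp_id, name, dept_id in conn.execute(
    "SELECT id, name, department_id FROM employees"
):
    G.add_node(f"employee:{emp_id}", name=name)
    G.add_edge(f"employee:{emp_id}", f"department:{dept_id}", relation="works_in")

print(G.number_of_nodes(), "nodes,", G.number_of_edges(), "edges")
```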
implementations?
Admit it: y'all built GraphRAG first for use by the CIA. This is not a joke.
Police, FBI, CIA, etc... investigations (CSI AI)
Why do we need other faces on screen? I hope they know they are just distractions :)
And I hope you know that you can zoom in on the screen to not see them, and that it's always better not to say anything if you don't have anything nice or useful to say 😊
What's a RAG?
Retrieval Augmented Generation (use that as an input to your favourite search engine or AI companion)
Where did you get the body for this? This whole text is taken from some chief Russian propaganda bureau 😅
First of all, there is no MOVEMENT, just state-sponsored Russian proxies like the Yemenis.
A very wrong choice of dataset.
Second thing: there is NO NOVOROSSIA.
You could have saved yourselves the political/propaganda element.
Out of the uncounted number of articles available, the choice of this particular topic is more than flashy. It's ridiculous for tech people who are expected to be smart in general...
The content is very political...
Haha, and skewed... Crickets for Gaza, but Odessa is worth mentioning?
This is why it's best to avoid politics when we're trying to stay on task, especially when dealing with tech that's literally forming and pruning knowledge graphs based on topics/themes...