Really underappreciated. This guy has the easiest and most complete explanation of the process I've found.
Thank you!
Thank you for sharing...
@paulntalo1425 My pleasure, hope it's helpful!
Nice demonstration of how performance can improve with in-context learning by providing examples of Cypher queries in the prompts to Gemini.
Once we figure out how conversational digital agents can work well enough as interfaces to knowledge graphs and other open data resources, we can optimize them to run on personal devices such as smartphones. Some smartphones now have 16 GB of RAM.
Great thought John!
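To make the in-context-learning point above concrete, here is a minimal sketch of putting question/Cypher example pairs into the prompt before the user's question. The example pairs, the movie-style schema, and the `llm` callable (any wrapper that takes a prompt string and returns text, e.g. around Gemini) are illustrative assumptions, not the exact prompt used in the video.

```python
# Sketch of few-shot Cypher generation: prepend question/Cypher pairs so the
# model can imitate the pattern. Example pairs and `llm` are placeholders.
EXAMPLES = [
    ("Who directed The Matrix?",
     "MATCH (p:Person)-[:DIRECTED]->(m:Movie {title: 'The Matrix'}) RETURN p.name"),
    ("Which movies did Tom Hanks act in?",
     "MATCH (p:Person {name: 'Tom Hanks'})-[:ACTED_IN]->(m:Movie) RETURN m.title"),
]

def build_cypher_prompt(question: str) -> str:
    """Build a prompt containing the example pairs followed by the new question."""
    shots = "\n\n".join(f"Question: {q}\nCypher: {c}" for q, c in EXAMPLES)
    return (
        "Translate the question into a Cypher query for the movie graph.\n\n"
        f"{shots}\n\nQuestion: {question}\nCypher:"
    )

def generate_cypher(question: str, llm) -> str:
    """`llm` is any callable that maps a prompt string to generated text."""
    return llm(build_cypher_prompt(question)).strip()
```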
16:20 - Even though the Cypher queries are returning the correct context (the answer), the LLM still responds with "I don't know the answer." How do we fix that?
Nice question. One solution we could try is fine-tuning the model. Another approach is to build our own pipeline that passes the content retrieved from the database directly to the LLM; this way, we can control it more flexibly.
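For that second approach, here is a rough sketch of what such a pipeline could look like, assuming the official `neo4j` Python driver and a generic `llm` callable; the connection details and prompt wording are placeholders, not the code from the video.

```python
# Rough sketch of the "own pipeline" idea: run the generated Cypher ourselves
# and hand the raw records to the LLM as context, instead of relying on the
# chain's built-in QA step. Connection details and `llm` are placeholders.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def answer_with_graph_context(question: str, cypher: str, llm) -> str:
    # Run the query and collect the result rows as plain dicts.
    with driver.session() as session:
        records = [record.data() for record in session.run(cypher)]
    # Pass the retrieved context directly to the LLM.
    prompt = (
        "Answer the question using only the context below. "
        "If the context is empty, say you don't know.\n\n"
        f"Context: {records}\n\nQuestion: {question}\nAnswer:"
    )
    return llm(prompt)
```

Because we build the final answer prompt ourselves, the retrieved context can't silently get lost between the Cypher step and the answering step.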
Thank you! I'm curious how to make Neo4j show the graph. Do I need to write it myself, or can it be auto-generated from my data?
In the people query, LOAD CSV WITH HEADERS is giving me a problem. What is in data1.csv?
Great stuff !!
Glad it was helpful!
Hi Geraldus - great post! I understand the approach of 'learning by example' through question/query pairs. I have a graph that contains semantics about relationships and descriptions of node attributes and labels. Why not train the LLM on the graph meta model itself?
Hey John, great question! I agree with you. Additionally, when it comes to production, we need a high-performance system, and to achieve that, we might need to consider fine-tuning. Here's another video of mine that might help answer your question: ua-cam.com/video/7VU-xWJ39ng/v-deo.html
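Short of fine-tuning, one lightweight way to use the graph meta model John describes is to read the schema from Neo4j and include it in the Cypher-generation prompt. A minimal sketch, assuming the official `neo4j` Python driver and a generic `llm` callable (placeholders, not the video's code):

```python
# Read the graph's meta model (labels and relationship types) from Neo4j and
# put it in the prompt as context, instead of relying only on example queries.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def fetch_schema_summary() -> str:
    """Summarise node labels and relationship types via built-in procedures."""
    with driver.session() as session:
        labels = [r["label"] for r in session.run("CALL db.labels()")]
        rel_types = [r["relationshipType"]
                     for r in session.run("CALL db.relationshipTypes()")]
    return f"Node labels: {labels}\nRelationship types: {rel_types}"

def generate_cypher_with_schema(question: str, llm) -> str:
    prompt = (
        "You are querying a Neo4j graph with this schema:\n"
        f"{fetch_schema_summary()}\n\n"
        f"Write a Cypher query that answers: {question}\nCypher:"
    )
    return llm(prompt)
```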
Thanks for sharing! What Python version are you using in this demo?
EDIT: I asked this question because I had some issues with python 3.9, switching to python 3.10 did the trick! ✨
Thanks again for making this video!
I have a question: why does the model reply with "I don't know the answer" even though the correct answer is in the full context?! 🤔
Very nice tutorial! By the way, how can we make sure we only extract the important and relevant text from our own documents (like TXT files or PDFs) to create nodes and relationships in Neo4j? I mean, PDFs often have a lot of extra stuff we don't care about.
This is an intriguing challenge. I am still exploring the best way to achieve this. Once I find a suitable solution, I will create a tutorial on it. I'm glad you asked about this!
Hi! Great stuff and explanations, thanks! Do you think it's possible for the LLM to understand a non-LLM-created Neo4j database? Like, take any Neo4j database, read through it, and understand it well enough to answer questions from users, using the same workflow? That would be so awesome!
Hey, thank you! Sorry, I didn't quite understand what you meant by a 'non-LLM-created Neo4j'. Do you mean the schema (entities and relationships)? If so, then yes, you could try using predefined data in Neo4j, like the movie database, and interact with it directly using an LLM. However, here's the point:
- To achieve better results, in my experience, we still need to fine-tune the model.
- Secondly, in a real-world scenario, we would want to insert our own data, right? That's why we extract the entities and relationships from our data (e.g. text, PDFs) using an LLM and push them to Neo4j.
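A bare-bones sketch of that extract-and-push workflow, assuming the official `neo4j` Python driver, a generic `llm` callable, and generic `:Entity`/`:RELATED_TO` labels (all illustrative simplifications, not the exact code from the video):

```python
# Ask the LLM for (subject, relation, object) triples as JSON, then MERGE
# them into Neo4j. Prompt wording, JSON parsing and the generic labels are
# simplifying assumptions.
import json
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def extract_triples(text: str, llm) -> list:
    prompt = (
        "Extract entities and relationships from the text below. "
        'Return only a JSON list of objects with keys "subject", "relation", "object".\n\n'
        f"Text: {text}"
    )
    return json.loads(llm(prompt))

def push_triples(triples: list) -> None:
    # MERGE avoids duplicating nodes on repeated runs; the relation name is
    # stored as a property because Cypher cannot parameterise relationship types.
    query = (
        "MERGE (a:Entity {name: $subject}) "
        "MERGE (b:Entity {name: $object}) "
        "MERGE (a)-[:RELATED_TO {type: $relation}]->(b)"
    )
    with driver.session() as session:
        for t in triples:
            session.run(query, t)
```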