6 videos · 29 908 views
Geraldus Wilsen
Joined Apr 11, 2024
I share my learning journey in data science and AI.
Deploy Django with Docker to DigitalOcean - Easy Server Setup!
In this video, I'll show you how to dockerize your Django app and deploy it to a DigitalOcean server.
Chapters:
00:00:00 - Intro
00:01:00 - Short demo of the Django app
00:01:14 - Dockerize a Django app
00:05:09 - Set up a DigitalOcean server
00:07:14 - Move the Django app to the server
00:09:55 - Test the deployed Django app on the DigitalOcean server
00:10:35 - Outro
GitHub: github.com/projectwilsen/kemensos_website
Views: 377
Videos
Tool Calling Agent Simplified (Langchain Tutorial)
938 views · 6 months ago
In this video, I'll break down the step-by-step process of how an agent works, while also covering some important topics such as tool calling, validating input using Pydantic, error handling, adding memory, and accessing intermediate steps. Finally, I'll show you a quick demo of a fully functional chatbot based on what we've already explored together. This is an excellent starting point for any...
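For reference, here is a minimal sketch (not the video's exact code) of a LangChain tool-calling agent with a Pydantic-validated tool, basic error handling, and access to intermediate steps. The model choice, tool, and prompt wording are illustrative assumptions.

```python
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field


class MultiplyInput(BaseModel):
    a: int = Field(description="First factor")
    b: int = Field(description="Second factor")


@tool(args_schema=MultiplyInput)
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("placeholder", "{chat_history}"),      # memory slot
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),  # tool-call history
])

agent = create_tool_calling_agent(llm, [multiply], prompt)
executor = AgentExecutor(
    agent=agent,
    tools=[multiply],
    return_intermediate_steps=True,  # expose each tool call and its result
    handle_parsing_errors=True,      # simple error handling
)

result = executor.invoke({"input": "What is 12 times 7?", "chat_history": []})
print(result["output"])
print(result["intermediate_steps"])
```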
Automate Data Pipeline for RAG with Github Actions
784 views · 7 months ago
Data is a key aspect of a RAG system. In some cases, we want to always get the latest data. For example, if we're building a chatbot for financial reports, research, or news, we want to obtain the most recent information. However, how to automate this pipeline is rarely discussed, and that's what I want to share in this video. I'll cover how to set up an ETL pipeline, introduce you to Supabase ...
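As context for the idea above, here is a hedged sketch of the kind of Python ETL script a scheduled GitHub Actions workflow could run to refresh a RAG store in Supabase. The source URL, table name, field names, and environment variables are assumptions, not the video's actual setup.

```python
import os

import requests
from supabase import create_client

SOURCE_URL = "https://example.com/latest-report.json"  # hypothetical data source


def extract() -> list[dict]:
    """Pull the newest records from the source."""
    resp = requests.get(SOURCE_URL, timeout=30)
    resp.raise_for_status()
    return resp.json()


def transform(records: list[dict]) -> list[dict]:
    """Keep only the fields the chatbot needs (a real pipeline would also chunk and embed)."""
    return [
        {"id": r["id"], "content": r["text"], "published_at": r["date"]}
        for r in records
    ]


def load(rows: list[dict]) -> None:
    """Upsert into Supabase so re-runs update rows instead of duplicating them."""
    client = create_client(os.environ["SUPABASE_URL"], os.environ["SUPABASE_KEY"])
    client.table("documents").upsert(rows).execute()


if __name__ == "__main__":
    load(transform(extract()))
```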
How to fine-tune LLMs using Unsloth? (text2cypher use case)
2.1K views · 8 months ago
This tutorial covers three main topics: 1. An explanation of PEFT and LoRA as fine-tuning methods. 2. How to prepare/generate the dataset used for fine-tuning. 3. How to fine-tune the model using Unsloth. How to use few-shot prompting: ua-cam.com/video/KMXQ4SVLwmo/v-deo.html How to Convert any Text into a Knowledge Graph: ua-cam.com/video/ky8LQE-82xs/v-deo.html Chapter: 00:00:00 - Intro 00:01:12 ...
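A condensed, hedged sketch of LoRA fine-tuning with Unsloth along the lines described above. The base model, dataset file, and hyperparameters are illustrative, and exact argument names can differ across Unsloth/TRL versions.

```python
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load a 4-bit quantized LLAMA3 base model.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# PEFT: attach small LoRA adapters instead of updating all weights.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Hypothetical JSON file of formatted text2cypher training examples.
dataset = load_dataset("json", data_files="text2cypher_pairs.json", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```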
Convert any Text Data into a Knowledge Graph (using LLAMA3 + GROQ)
12K views · 8 months ago
I shared the most efficient way to convert any text data into a knowledge graph using LLAMA3 GROQ. It's free and straightforward. No need any GPU, you could run it in a standard device. Basic Tutorial: ua-cam.com/video/KMXQ4SVLwmo/v-deo.html Chapter: 00:00:00 - Intro 00:00:44 - How to load any text data? 00:01:35 - Data Overview 00:02:15 - Map Reduce Summarization 00:03:11 - Should we use spacy...
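One possible way (not necessarily the video's exact approach) to turn raw text into graph documents with LLAMA3 served via Groq is LangChain's LLMGraphTransformer. The model name and allowed node labels below are assumptions; GROQ_API_KEY is read from the environment.

```python
from langchain_core.documents import Document
from langchain_experimental.graph_transformers import LLMGraphTransformer
from langchain_groq import ChatGroq

# ChatGroq reads GROQ_API_KEY from the environment.
llm = ChatGroq(model="llama3-70b-8192", temperature=0)

transformer = LLMGraphTransformer(
    llm=llm,
    allowed_nodes=["Person", "Organization", "Location"],  # optional schema hint
)

text = "Marie Curie worked at the University of Paris with Pierre Curie."
graph_docs = transformer.convert_to_graph_documents([Document(page_content=text)])

for doc in graph_docs:
    print(doc.nodes)          # extracted entities
    print(doc.relationships)  # extracted relations between them
```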
The easiest way to chat with Knowledge Graph using LLMs (python tutorial)
14K views · 9 months ago
What is a knowledge graph? How do you integrate it with LLMs? If you have the same questions, this video is for you! I explain everything from the basic theory to setting up a Neo4j database, building a graph chain using LangChain, Python, and Gemini, and applying prompting strategies to enhance the model's performance. Chapter: 00:00:00 - Intro 00:00:31 - What is Knowledge Gr...
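A minimal sketch of the kind of graph chain described here: a Neo4j graph wrapped by LangChain's GraphCypherQAChain with Gemini as the LLM. Connection details, model name, and the example question are assumptions, and import paths vary somewhat across LangChain versions.

```python
import os

from langchain.chains import GraphCypherQAChain
from langchain_community.graphs import Neo4jGraph
from langchain_google_genai import ChatGoogleGenerativeAI

# Connect to a locally running Neo4j instance (credentials are placeholders).
graph = Neo4jGraph(
    url="bolt://localhost:7687",
    username="neo4j",
    password=os.environ["NEO4J_PASSWORD"],
)

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash", temperature=0)

chain = GraphCypherQAChain.from_llm(
    llm=llm,
    graph=graph,
    verbose=True,                   # print the generated Cypher for inspection
    allow_dangerous_requests=True,  # recent versions require this explicit opt-in
)

print(chain.invoke({"query": "Which movies did Tom Hanks act in?"}))
```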
Can you share the steps.txt file?
Hi, I've just updated the video description with the repository link. Feel free to check it out!
Thank you for sharing...
@@paulntalo1425 My pleasure, hope it's helpful!
Love your intro
Thank you!
Why don't we just provide a template to the LLM and feed sentences one by one, letting the LLM decide the entities?
Hey, it depends on the use case. If we have no idea what kind of entities we want to extract, then letting the LLM decide is reasonable. However, in most cases I've worked on, we already have the target entities. In that case, it's easier to ask the LLM to extract and map based on our needs, and it also helps in the next preprocessing step.
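A hedged sketch of the "predefined target entities" idea from this reply, using structured output so the LLM maps mentions onto a fixed set of types; the schema, model, and example text are illustrative.

```python
from typing import Literal

from langchain_groq import ChatGroq
from pydantic import BaseModel, Field


class Entity(BaseModel):
    name: str = Field(description="Surface form found in the text")
    type: Literal["Person", "Organization", "Location"]  # the fixed target types


class Extraction(BaseModel):
    entities: list[Entity]


# ChatGroq reads GROQ_API_KEY from the environment.
llm = ChatGroq(model="llama3-70b-8192", temperature=0)
extractor = llm.with_structured_output(Extraction)

result = extractor.invoke(
    "Extract entities from: 'Ada Lovelace worked with Charles Babbage in London.'"
)
print(result.entities)
```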
Thanks for this amazing video! I have some suggestions if you can: * Can you add a domain name? * Can you add Celery + Docker? * How can we host 2 or 3 containers on the same server with Nginx as a reverse proxy?
@@alexdin1565 Hi Alex, thank you! Sure, I've added those three things to my to-do list. I'll make a second tutorial covering those topics once I'm free.
Thank you for the very nice video. I encountered the following error executing response = chain.run(...): [ValueError: Missing some input keys: {'format_instructions'}. Output is truncated. View as a scrollable element or open in a text editor. Adjust cell output settings...] Can you help with that? I was using LangChain 0.3.0 and have now downgraded to 0.0.39, but it's still not working.
Thanks Geraldus Wilsen!!! It was a very helpful video.
@@nitinkapse Thank you! Glad to know that it's helpful for you
Hi Geraldus - great post! I understand the approach of 'learning by example' through question/query pairs. I have a graph that contains semantics about relationships and descriptions of node attributes and labels. Why not train the LLM on the graph meta model itself?
Hey John, great question! I agree with you. Additionally, when it comes to production, we need a high-performance system, and to achieve that, we might need to consider fine-tuning. Here's another video of mine that might help answer your question: ua-cam.com/video/7VU-xWJ39ng/v-deo.html
It's great. We are working on the project and will use this. Just want a tutorial on how we can query the KG using the LLM itself?
Love you
Thanks!
Thank you! I am curious how to make Neo4j show the graph. Do I need to write it, or can it be auto-generated from my data?
Really underappreciated. This guy has the easiest and most complete explanation of the process compared to others.
Thank you!
In the people query, LOAD CSV WITH HEADERS is giving me a problem. What is in data1.csv?
Thanks again for making this video! I have a question: why is the correct answer in the full context and yet the model still replies with "I don't know the answer"?! 🤔
Thanks for sharing! What Python version are you using in this demo? EDIT: I asked this question because I had some issues with python 3.9, switching to python 3.10 did the trick! ✨
Thank you for sharing this deep dive! Is there a way to find a schema to convert a book (that has chapters, and such) to KG?
Fantastic! Such great content on tool calling, agents, validation and memory in LangChain all in one video! You explained a lot of the key details that others never go into. Showing the different models' performance with the Pydantic validator was very revealing. Great tip with the intermediate_steps parameter. Definite subscribe... this channel needs more views! I really hope you can make some more detailed videos like this. Anything on LangGraph or LlamaIndex? Thanks again!
Thank you for your kind words! Yes, of course, LangGraph is in progress!
Best new channel ❤❤
Thank you!
Nice video Geraldus, but you shouldn't be doing whatever processing you are doing on the audio (speeding it up?). It makes it very difficult to understand.
Hey James, thank you so much for your feedback!
@@geralduswilsen No problem Geraldus, I like your content. Glad you didn't take it personally.
Everybody needs to stop pushing an individual company and that company's proprietary language. You're giving Neo4j a monopoly on knowledge graphs, especially if your content is educational. Knowledge graphs are much easier to learn independently of Neo4j.
16:20 - Even though the Cypher queries return the correct context (answer), the LLM still responds with "I don't know the answer." How do we fix that?
Nice question. One solution we could try to solve this problem is fine-tuning the model. Another approach is to create our own pipeline to directly pass the content retrieved from the database to the LLM. This way, we can control it more flexibly.
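A hedged sketch of the "own pipeline" idea from this reply: run the Cypher query ourselves and pass the raw records straight to the LLM. The connection details, query, and model are illustrative assumptions.

```python
import os

from langchain_community.graphs import Neo4jGraph
from langchain_core.prompts import ChatPromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI

graph = Neo4jGraph(
    url="bolt://localhost:7687",
    username="neo4j",
    password=os.environ["NEO4J_PASSWORD"],
)

# Run the Cypher query ourselves instead of letting a QA chain decide
# what to forward to the model.
records = graph.query(
    "MATCH (p:Person)-[:ACTED_IN]->(m:Movie {title: $title}) RETURN p.name",
    params={"title": "Apollo 13"},
)

prompt = ChatPromptTemplate.from_template(
    "Answer using only this database result:\n{context}\n\nQuestion: {question}"
)
llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash", temperature=0)

answer = (prompt | llm).invoke(
    {"context": records, "question": "Who acted in Apollo 13?"}
)
print(answer.content)
```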
Very nice tutorial! By the way, how can we make sure we only extract the important and relevant text from our own documents (like TXT files or PDFs) to create nodes and relationships in Neo4j? I mean, PDFs often have a lot of extra stuff we don't care about.
This is an intriguing challenge. I am still exploring the best way to achieve this. Once I find a suitable solution, I will create a tutorial on it. I'm glad you asked about this!
Nice demonstration of how performance can improve with in-context learning by providing examples of cypher queries in the prompts to gemini. Once we figure out how conversational digital agents can work well enough as interfaces to knowledge graphs and other open data resources, we can optimize it to work on personal devices such as smartphones. Some smartphones now have 16 GB of RAM.
Great thought John!
Thanks. I love it
Thank you Fahmi!
Hi! Great stuff and explanations, thanks! Do you think it is possible for the LLM to understand a non-LLM-created Neo4j database? Like take any Neo4j database, read through it, and understand it to answer questions from the user, using the same workflow? Would be soooooo awesome!
Hey, thank you! Anyway, sorry, I didn't really understand what you meant by 'non-LLM-created Neo4j'. Do you mean the schema (entities and relationships)? If so, then yes, you could try using predefined data in Neo4j, like the movie database, and directly interact with it using an LLM. However, here's the point: - To achieve better results, in my experience, we still need to fine-tune the model. - Secondly, in a real-world scenario, we would want to insert our own data, right? That's why we extract the entities and relationships from our data (e.g. text, PDF) using an LLM and push them to Neo4j.
Excellent effort, great explanation, nicely paced, a great idea to build, and showing the process of how you got here is also great. The majority of the videos out there are about how to invoke an OpenAI API, which has limited educational value, but your video is amazing. Great effort!
Thank you for your kind words! Hope it is helpful!
Another interesting approach is to use a chat prompt template by Tomaz Bratanic. Check it out here: github.com/neo4j-labs/text2cypher/blob/main/finetuning/unsloth-llama3/llama3_text2cypher_chat.ipynb
Great stuff !!
Glad it was helpful!
Hi, I wish you had used the semantic extraction tool from LangChain or LlamaIndex; this can solve the "he" problem. Also, would you please explain how to add newly extracted data and merge it into existing KG data?
The manual coreference resolution part implied a challenge. Did you try some prompting to work around it?
Hey! I haven't tried using other prompting strategies to solve this challenge, but I would like to explore them further next time. It's also worth mentioning that I process each line individually, which might have caused the LLM to miss the context. If we process correlated sentences together (for example, sentences 1 & 2) in one run, we might obtain different results, as I believe the LLM would grasp the context better in that case (see the sketch after this thread).
@@geralduswilsen Say, if this text2graph thing works out, would you mind doing a video on how to scale it up across documents, even make a full-blown API out of it? :)
@@xuantungnguyen9719 sure, would love to do it! I'll try to explore further
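A small sketch of the idea from the reply above: group neighbouring sentences into one chunk before extraction so the LLM keeps enough context to resolve pronouns like "he". The splitting rule is deliberately naive and purely illustrative.

```python
def sentence_pairs(text: str, size: int = 2):
    """Yield chunks of `size` consecutive sentences (very naive sentence splitting)."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    for i in range(0, len(sentences), size):
        yield ". ".join(sentences[i:i + size]) + "."


doc = "Alan Turing was born in London. He studied at King's College."
for chunk in sentence_pairs(doc):
    print(chunk)  # each chunk would be sent to the extraction LLM in one run
```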
Hey, this channel is dope
Thank you!
Great guide! Next, it would be great to cover how to integrate new information into an existing KG.
Thanks for the feedback! I'll keep it in mind for the next videos
Excellent!
Many thanks!
Great!
Thank you!
this seems AI generated
Hey, hope it's helpful!
Nice work Geraldus !
Thank you Pierre!
Very cool!
Thank you Tomaz!