AI RoundTable
Chat with Your Database Using AskYourDatabase and LLM agents (A Review)
In this video, I review AskYourDatabase (AYD), a no-code and low-code app that allows you to chat with various databases, including MySQL, PostgreSQL, MongoDB, SQL Server, and Snowflake. This chatbot uses LLM agents built with GPT-3.5 and GPT-4 to interact with databases using natural language.
00:00 General Overview
02:54 Test prep (Sakila DB)
04:59 Connecting our DB to AYD
07:04 Adding DB documentation and extra examples to the LLMs
09:15 Testing AYD desktop app
11:46 Performance in the presence of a typo in the query
15:17 Performance in case the answer is too long
17:54 Performance with complex queries containing a chain of questions
21:34 Asking AYD to generate images
26:18 Integrating the AYD chatbot into a custom app and testing it
Resources:
AYD website: www.askyourdatabase.com/?
AYD documentation: www.askyourdatabase.com/docs?
AYD chatbot dashboard: www.askyourdatabase.com/dashboard/chatbot?
MySQL Workbench: dev.mysql.com/downloads/workbench/
Sakila DB installation: dev.mysql.com/doc/sakila/en/sakila-installation.html
ngrok installation: ngrok.com/download
#LLM #GPT #chatbot #agent #database #ai
Views: 3,023

Videos

Chat and RAG with Tabular Databases Using Knowledge Graph and LLM Agents
Views: 12K • 2 months ago
In this video, together we will go through all the steps to construct a #knowledgegraph from Tabular Datasets and design a ChatBot APP to interact with the Knowledge Graph using natural language. For this purpose, we will use Knowledge Graph LLM agents and the GPT model. We will design a Chatbot that can: 1. Chat with Graph DB using an improved LLM agent 2. Chat with Graph DB using a simple LLM...
Chat with SQL and Tabular Databases using LLM Agents (DON'T USE RAG!)
Views: 34K • 2 months ago
In this video, together we will go through all the steps necessary to design a ChatBot APP to interact with SQL and Tabular Databases using natural language, SQL LLM agents, and GPT 3.5. We will design a Chatbot that can: 1. Chat with SQL DB that we create from SQL files 2. Chat with SQL DB that we create from CSV and XLSX files 3. Chat with SQL DB that we create by uploading documents while us...
All-In-One Chatbot: RAG, Generate/analyze image, Web Access, Summarize web/doc, and more...
Views: 2.8K • 3 months ago
HUMAIN V1.0 is a Multi-Modal, Multi-Task chatbot project, empowered with 4 Generative AI models and was built on top of RAG-GPT and WebRAGQuery. Features: - Can act similar to ChatGPT - Has 3 RAG capabilities: RAG with processed docs, upload docs, and websites - Can generate images - Can summarize documents and websites - Connects a GPT model to the DuckDuckGo search engine (the model uses sear...
Open Source RAG Chatbot with Gemma and Langchain | (Deploy LLM on-prem)
Views: 4.5K • 4 months ago
In this video, I show how to serve your open-source LLM and Embedding model on-prem for designing a Retrieval Augmented Generation (RAG) chatbot. For this purpose, I take RAG-GPT chatbot and instead of the GPT model, I use *Google Gemma 7B* as the LLM and instead of text-embedding-ada-002, I use *baai/bge-large-en* from Huggingface. I use Flask to develop a web server that will serve the LLM fo...
Nvidia's Free RAG Chatbot supports documents and youtube videos (Zero Coding - Chat With RTX)
Views: 4K • 5 months ago
Chat With RTX is a free chatbot released by Nvidia. It can be used as a general AI chatbot, for RAG with documents, and for RAG with YouTube videos. In this video, I show how to install and use the chatbot. I also test its inference time, accuracy, and hallucination with both the Mistral 7B and Llama2 13B parameter Large Language Models (LLMs). Link to download the chatbot: www.nvidia.com/en-us/ai-on-rtx...
Langchain vs Llama-Index - The Best RAG framework? (8 techniques)
Views: 14K • 5 months ago
Curious about which RAG technique suits your project best? Here I compare two chatbots and examine eight techniques from #Langchain and #llamaindex, and I'll guide you through designing a pipeline to evaluate your RAG system's performance efficiently. And that's just the beginning! We'll analyze their effectiveness across various documents and tackle over 40 questions, putting these techniques to a...
Fine-tuning Large Language Models (LLMs) | w/ Full Code
Views: 1.8K • 6 months ago
We use a fictional company called Cubetriangle and design the pipeline to process its raw data, #finetune 3 large language models (#LLM) on it, and design a #chatbot using the best model. 00:00:17 Presentation (Key concepts) 00:02:39 #llama2 Pre-trained vs llama2-chat (fine-tuned) models 00:13:22 Fine-tuning schema 00:14:22 CubeTriangle data description 00:17:21 Data processing (first step) 00:...
ChatGPT v2.0: Chat with Websites, Search the Web, and Summarize Any Website | Chainlit Chatbot
Views: 1.2K • 6 months ago
#WebRagQuery is a powerful #chatbot, built with #OpenAI #GPT model in #chainlit user interface, that harnesses the power of GPT #agents, #functioncalling, and #RAG to offer an enhanced conversational experience. Here's how you can make the most of its diverse functionalities: *Normal ChatGPT Interaction:* Engage in natural conversations as you would with a regular ChatGPT app, experiencing seam...
Connect GPT Agent to Duckduckgo Search Engine | Streamlit Chatbot
Views: 1.2K • 6 months ago
*WebGPT* is a powerful chatbot, designed with #streamlit, that enables users to pose questions that require internet searches. Leveraging #GPT models: * It identifies and executes the most relevant of the given #Python functions in response to user queries. * The second GPT model generates responses by combining user queries with content retrieved from the #websearch engine. * The user-friendly interf...
RAG-GPT: Chat with any documents and summarize long PDF files with Langchain | Gradio App
Views: 26K • 7 months ago
RAG stands for Retrieval Augmented Generation and RAG-GPT is a powerful chatbot that supports three methods of usage: 1. *Chat with offline documents:* Engage with documents that you've pre-processed and vectorized. These documents will be integrated into your chat sessions. 2. *Chat with real-time uploads:* Easily upload documents during your chat sessions, allowing the chatbot to process and ...
RAG explained: A Step-by-Step Guide to Vector Search and Content Retrieval
Views: 1.4K • 7 months ago
This is the first video covering the fundamentals that we need for designing the Chatbots. We start by breaking down text embedding and its role in Retrieval-Augmented Generation (RAG) systems. Then, we put theory into practice by building a simple RAG system from scratch. 📘 Get the tutorial notebooks here: github.com/Farzad-R/LLM-Zero-to-Hundred/tree/master/tutorials/text_embedding_tutorial. 📚...
LLM-Zero-to-Hundred Introduction
Views: 1.7K • 7 months ago
Welcome to the premiere episode of our "LLM-Zero-to-Hundred" series! Dive into the world of Large Language Models (LLMs) as we explore a variety of applications and techniques. 📚 Explore the full GitHub repo: github.com/Farzad-R/LLM-Zero-to-Hundred/tree/master 🔗 Connect on LinkedIn: www.linkedin.com/in/farzad-roozitalab/ 🌐 Visit my website: farzad-r.github.io/ AI image in the thumbnail is from:...

COMMENTS

  • @mr.fremen
    @mr.fremen 19 hours ago

    Great video! Question: I want to use a graph DB to combine my unstructured knowledge in PDFs with the structured user data in MongoDB. My aim is that the LLM can retrieve both the necessary data and the knowledge needed to solve the problem when approaching a user request. The only problem is that the user data keeps changing. Is there a way to update my graph DB every time I add/change something in my MongoDB (which my application is essentially running on)?

  • @nicolasfelipe1
    @nicolasfelipe1 1 day ago

    Very, very good tutorial on a topic which confuses many people. Thanks.

    • @airoundtable
      @airoundtable 3 hours ago

      I am glad that it was helpful!

  • @Soendup21-k8p
    @Soendup21-k8p 3 days ago

    I have a real-time database. Can this still work? Thanks for the awesome video.

    • @airoundtable
      @airoundtable 2 days ago

      Thanks! I haven't tested it in that scenario but I think it should work. As long as the DB that is getting updated is the one that is passed to the agent, the agent should be able to interact with it in real-time.

  • @Desleiden
    @Desleiden 6 days ago

    Thank you so much for this. This is gold!!!

    • @airoundtable
      @airoundtable 5 days ago

      I am glad the content was helpful!

  • @Harshey_3
    @Harshey_3 6 days ago

    Is it possible to use a local LLM without using an API key, e.g. using Ollama with Mistral?

    • @airoundtable
      @airoundtable 3 days ago

      Yes, you can use local LLMs. But the LLM must be very powerful and able to handle long context lengths. Plus, the agent itself needs to be adjusted and modified to work with the open-source LLM.

  • @infraia
    @infraia 7 days ago

    So +/- 60% accuracy from graphRAG with tabular info? Not there yet!

    • @airoundtable
      @airoundtable 7 days ago

      I didn't understand the +/-60% accuracy. But if you are referring to the accuracy of this technique, I should say this approach is probably on the frontline of retrieval techniques and it is very new. So, I expect it to become better and better over time with the advancements in the field. I should also say that between GraphRAG with tabular data and using graph agents to directly query the DB, I prefer the second approach and would use agents.

  • @infraia
    @infraia 7 days ago

    Well done

  • @MuhammadAdnan-tq3fx
    @MuhammadAdnan-tq3fx 9 days ago

    This is awesome. I need this type of series. Now waiting for the next video on function calling?

    • @airoundtable
      @airoundtable 8 days ago

      Thanks. I am glad you liked the video. I removed the function calling video because OpenAI updated the functions and they changed the way the models were called. But you can have access to the code here: - github.com/Farzad-R/LLM-Zero-to-Hundred/tree/master/tutorials/LLM-function-calling-tutorial plus, I am using function calling in the following two projects: - ua-cam.com/video/KoWjy5PZdX0/v-deo.htmlsi=DN1Gt6sA8W-E4C2l - ua-cam.com/video/55bztmEzAYU/v-deo.htmlsi=kMV5ZtPaGugVckP6 And in the next video I will explain the new ways of function calling and how to design LLM agents from scratch. That project is almost ready!

  • @MuhammadAdnan-tq3fx
    @MuhammadAdnan-tq3fx 9 days ago

    Thank you so much for such a valuable and unique series.

  • @黄迪宏
    @黄迪宏 9 days ago

    Hi, I wonder whether this system works on macOS? I saw in your repository that it works on Linux and Windows.

    • @airoundtable
      @airoundtable 8 days ago

      This system works on MacOS. If the provided requirements.txt file doesn't run, install the necessary libraries manually. Once you have them, you're good to go.

    • @黄迪宏
      @黄迪宏 7 days ago

      @@airoundtable Thank you so much! I have a question regarding RAG: I am currently working on survey data with open-ended questions and mostly typed-in answers (which means they are unstructured and may vary by personal writing/spelling/abbreviation habits, like ai/AI/A.I., etc.), and I wonder what approach would be best for me to analyze and do Q&A with the data, especially grouping and pairing users by their answers. I know a SQL query is of no use due to the nature of the unstructured responses, and traditional RAG that doesn't use the model's own knowledge but solely the context (I remember seeing this in one of your videos, correct me if I am wrong...) may not be helpful, because I need the model to read these answers like a human, not a string-matching machine.

    • @airoundtable
      @airoundtable 7 days ago

      @@黄迪宏 You're welcome! I need to know more about the source of the data. When users ask their question, what type of data do you want to search to get the answer? Is it tabular, semi-structured, or fully unstructured data?

    • @黄迪宏
      @黄迪宏 7 days ago

      @@airoundtable It's an Excel file of Google survey results; each column is the answer to one question and each row refers to one respondent. Most of the questions require typing instead of selecting an option, so the answers are all unstructured strings. I would like to have the model answer questions like who and who have similar interests in what (grouping), whose skills seem to be able to solve whose problem (pairing), who and who know the same people, etc.

    • @airoundtable
      @airoundtable 7 days ago

      @@黄迪宏 Interesting... I have never worked on such a problem. But to quickly brainstorm a bit: if the types of questions that you want to ask of the DB are clear and you know what you are looking for, you can use an LLM to construct a knowledge graph from the unstructured data and use a graph agent to query the graph DB that you created from the knowledge graph (I have a video on knowledge graphs, and in the final part I explain a similar problem for a medical chatbot). With this approach you can at least answer all the questions that you mentioned in your previous reply. However, if the types of questions are not clear and it is an open-ended problem where users can ask literally anything of the DB, that makes it a much harder problem to solve, because you not only want to search the text but also want to connect the dots between multiple possible answers and draw a conclusion from them. I am not sure what would work for that scenario at the moment.

  • @aussie-elders
    @aussie-elders 9 days ago

    Really well done, probably the most complete RAG I have seen to date... I have a lot to learn from you here, so thank you. One question: do you have anything explaining the use of local LLMs with a tool such as this?

    • @airoundtable
      @airoundtable 9 days ago

      Thanks. I am glad you liked the video. Yes, I have a video explaining how to design a RAG chatbot using open-source LLMs, which might interest you. Here is the link: ua-cam.com/video/6dyz2M_UWLw/v-deo.htmlsi=XvWfkFAEL4Jt4CsX

  • @user-vu4or4ih8p
    @user-vu4or4ih8p 9 days ago

    Thanks!

  • @shue78123
    @shue78123 10 days ago

    Very interesting. In your opinion, what are the steps to take in order to convert the SQL query results into a visualized chart?

    • @airoundtable
      @airoundtable 10 days ago

      Thanks. Add agents specifically designed for data visualization to the system. That would be my first choice, since current agents are very good at retrieving data but they just were not designed to visualize it.

  • @delphiboy2010
    @delphiboy2010 10 days ago

    Well done! Thank you, it was useful for me.

    • @airoundtable
      @airoundtable 10 days ago

      Thanks! I am glad you liked the video

  • @sharadsisodiya3853
    @sharadsisodiya3853 11 days ago

    The video was great and amazing. There is one issue: you use only paid LLMs. Please also use open-source LLMs; this will help us more.

    • @airoundtable
      @airoundtable 11 days ago

      Thanks for the suggestion. I will try to include more applications with open-source LLMs.

  • @debarghyadasgupta1931
    @debarghyadasgupta1931 12 days ago

    How about adding some few-shot examples for the agent and a bit of table reference to shrink the context window and make it more fine-tuned?

    • @airoundtable
      @airoundtable 12 days ago

      That is a great idea. Adding few-shot examples and extra content can improve the performance.

    • @debarghyadasgupta1931
      @debarghyadasgupta1931 10 days ago

      @@airoundtable Today, we achieved excellent results by incorporating a few-shot learning approach, enhancing them with dynamic vectors, so eventually dynamic few-shots. We also developed a concise table description aligned with the few-shot prompts. These results are highly optimized, significantly reducing calls to OpenAI and token usage. Even with GPT-3.5-turbo, we've used fewer than 3,000 tokens so far.

    • @airoundtable
      @airoundtable 10 days ago

      @@debarghyadasgupta1931 It is great to hear! I really like the strategy, and it is very interesting to me that the number of calls to OpenAI was reduced as well. Great job, and thanks for sharing the results with me!

    • @黄迪宏
      @黄迪宏 9 days ago

      @@debarghyadasgupta1931 Hi, good to hear your great progress! Would you mind sharing some insights about how to do it? I am interested in how to incorporate few-shot and dynamic vectors.

    • @debarghyadasgupta1931
      @debarghyadasgupta1931 8 days ago

      @@黄迪宏 The few-shot examples were optimized using vectors. First, you need to embed all your few-shot examples. Then, when a user poses a question, you perform a vector similarity search exclusively on these few shots. Typically, Approximate Nearest Neighbors (ANN) is used for this purpose. I prefer using k=5, so among the 100 or possibly 1000 few-shot examples in your collection, you will identify the top 5 that are most relevant. Adding context from the table can further refine the query. This approach not only reduces the number of calls to the LLM API but also decreases the token usage.
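
A minimal sketch of the dynamic few-shot selection described in this thread, for readers who want to try it: embed every few-shot example once, then pick the k most similar examples per user question. The example pairs, table names, and the embedding model are placeholders, and the exact cosine search below stands in for the ANN index mentioned above.

```python
import numpy as np
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

few_shots = [
    {"question": "How many films are longer than 2 hours?",
     "sql": "SELECT COUNT(*) FROM film WHERE length > 120;"},
    {"question": "List the top 5 customers by total payments.",
     "sql": "SELECT customer_id, SUM(amount) AS total FROM payment "
            "GROUP BY customer_id ORDER BY total DESC LIMIT 5;"},
    # ... in practice, 100+ examples
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

shot_vectors = embed([s["question"] for s in few_shots])  # embed all examples once, reuse per query

def select_few_shots(user_question, k=5):
    q = embed([user_question])[0]
    # exact cosine similarity; for large collections swap in an ANN index (e.g. FAISS)
    sims = shot_vectors @ q / (np.linalg.norm(shot_vectors, axis=1) * np.linalg.norm(q))
    top = np.argsort(sims)[::-1][:k]
    return [few_shots[i] for i in top]

selected = select_few_shots("Which customers spent the most last year?")
prompt_examples = "\n".join(f"Q: {s['question']}\nSQL: {s['sql']}" for s in selected)
print(prompt_examples)  # paste the selected examples into the system prompt
```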

  • @LIVELOVEASH645
    @LIVELOVEASH645 13 days ago

    Excellent idea and project. I am doing the same project in my college as part of a mini project. Could you guide me to accomplish this? The main problem is that the OpenAI API key is a paid one; aren't there any others to use?

    • @LIVELOVEASH645
      @LIVELOVEASH645 13 days ago

      This is the abstract of my project: "An Intelligent Chatbot for Attendance & Academic Support in Education." This study introduces an AI-driven chatbot designed for educational institutions, focusing on attendance management and academic support. Leveraging natural language processing (NLP) and machine learning (ML), the chatbot simplifies attendance tracking for educators and provides personalized academic guidance for students. It offers real-time updates on attendance records and performance metrics, and suggests tailored improvements. Additionally, serving as a communication and data visualization platform, it offers dynamic representations of statistics, enhancing decision-making. Overall, this intelligent chatbot streamlines administrative tasks and fosters student success in education.

    • @airoundtable
      @airoundtable 12 days ago

      Thanks! OpenAI models are not free. But you can try to build this project using open-source models from Hugging Face. Search for LangChain agents using Hugging Face models and you will find some guides on how to design the agents using open-source LLMs.

    • @airoundtable
      @airoundtable 12 days ago

      It is an interesting project. With a good database, you can design an agent to accomplish the goals that you mentioned in the project description.

    • @LIVELOVEASH645
      @LIVELOVEASH645 12 days ago

      @@airoundtable Yeah, thanks. Many discouraged me, saying that since I am a 2nd-year student at a 3rd-tier B.Tech college I couldn't do it, but I am willing to try and will see where it goes. Thanks again 🙂

    • @airoundtable
      @airoundtable 12 days ago

      ​@@LIVELOVEASH645 Don't let anyone discourage you. If you think you can, you can.

  • @einstienn
    @einstienn 14 days ago

    Can the project generate plots of the data, such as histograms, scatter plots, line plots, etc., and also answer questions about the data in a comprehensive and informative way, or can it only perform statistical analysis like mean, median, standard deviation, etc.?

    • @airoundtable
      @airoundtable 13 days ago

      It can perform simple statistical analysis like mean, median, standard deviation, etc. I am not sure what you meant by "answer questions about the data in a comprehensive and informative way". But since the agent has access to the database schema and can also be provided with further information such as examples and details of the DB, it can have a very good understanding of the DB and would be able to answer general questions as well. It cannot plot the data and for that, you need to improve the agent's capabilities and possibly add a new agent designed for visualization tasks to the system.

  • @Sahil-te3kz
    @Sahil-te3kz 14 days ago

    I wanted to know, out of these 3 methods you showed, which one consumes the fewest tokens and is the most cost-effective?

    • @airoundtable
      @airoundtable 14 days ago

      Firstly, it's important to note that selecting strategies based solely on token usage isn't practical, as each strategy serves a different purpose. Generally, over the long term, LLM agents tend to consume more tokens compared to standard RAG approaches. This is particularly true when multiple agents collaborate or require frequent debugging. In RAG and similar approaches, the primary cost arises during initial data vectorization. After that, it functions as a straightforward chatbot with lower token consumption compared to agents.

  • @harrysharma3435
    @harrysharma3435 14 days ago

    Hi, I have a CSV extract of 8k rows. I want to ask it questions and get responses. Which would be the best among the methods you explain? I assume I am on the right video ☺️

    • @airoundtable
      @airoundtable 14 days ago

      Hi Harry. Most probably you asked it on the right video :)). In general, the approach that works best very much depends on the type of questions that you would like to ask of the database. As a simple rule, if your questions can be answered by querying the CSV file, then SQL agents would be the best choice. If your questions can be answered by vector search (the answer and question have a semantic relationship) rather than a logical query, then RAG can be the better approach. If you are not sure, test the approaches on a subset of the data and evaluate their performance. My guess, though, is that you would get the answers to your questions using logical queries, so agents would be my first suggestion.

    • @harrysharma3435
      @harrysharma3435 14 days ago

      @@airoundtable Is the agent more token-consuming? I am using OpenAI.

    • @airoundtable
      @airoundtable 14 days ago

      @@harrysharma3435 Yes, in general agentic systems are more token-consuming than RAG approaches. The number of agents collaborating, the amount of data they have to process in every query, and the number of times they have to correct themselves to get the answer directly impact the cost of agentic systems.

  • @tremeregoratrix
    @tremeregoratrix 16 days ago

    Could you explain to me how to replace Azure with rocama for this example? Thank you.

    • @airoundtable
      @airoundtable 16 days ago

      I didn't understand what rocama is. I might be able to help if you can give me more info.

  • @AbdulSofyan-m6z
    @AbdulSofyan-m6z 17 days ago

    This video is so inspiring and wonderful! I really appreciate the effort you put into creating such an amazing tutorial. I was particularly impressed with the All-In-One Chatbot capabilities. I have also watched your video about Chat with SQL and Tabular Databases using LLM Agents. I really do admire your work! I believe it would be even more fascinating if you could make a tutorial on how to build a chatbot that can display images generated from a code interpreter. For example, it would be awesome to see how to create graphs or charts from a prompt plus CSV files or SQL or tabular databases, similar to the features available in OpenAI's GPT-4. Being able to visualize data directly in a chatbot would be incredibly helpful. Thanks again for the fantastic content!

    • @airoundtable
      @airoundtable 17 days ago

      Thanks for the kind words. I am very happy to hear the content was useful for you. I might make another video to improve the SQL agent and add more features to it. But at the moment I am working on two other videos so that one will happen somewhere down the road. In the meantime, there are some articles that you can check to get an idea of how to make agents for data visualization. Here are some examples. I hope they help dev.to/ngonidzashe/chat-with-your-csv-visualize-your-data-with-langchain-and-streamlit-ej7 medium.com/@nageshmashette32/automate-data-analysis-with-langchain-3c0d97dec356 www.reddit.com/r/LangChain/comments/1d2qqqy/building_an_agent_for_data_visualization_plotly/

  • @mikew2883
    @mikew2883 18 days ago

    Awesome tutorial! 👍

  • @davegordon6233
    @davegordon6233 19 days ago

    The fare of 31.275 at 54:40 does not seem right.

    • @airoundtable
      @airoundtable 18 days ago

      It is correct. You can find the answer to this question in row #15 of the titanic_small dataset: Pclass: 3 - Name: Mr. Anders Johan Andersson - Sex: male - Age: 39 - Siblings/Spouses Aboard: 1 - Parents/Children Aboard: 5 - Fare: 31.275

  • @johnlaw3276
    @johnlaw3276 20 days ago

    Is there a video talking about how to combine RAG, a SQL agent, and a knowledge graph?

    • @airoundtable
      @airoundtable 18 days ago

      I don't have a video about combining them. But to combine them, there first needs to be a logical use case that requires a specific roadmap and mapping, since combining them will not solve a general problem. But if you have a use case in mind, you can definitely do it.

  • @sujit5013
    @sujit5013 22 days ago

    Can you do a series on llama-index? They have a lot of tools and it’s so different from building using langchain

    • @airoundtable
      @airoundtable 22 days ago

      You are right, llama-index is great and has a lot of tools. I will check it again soon. My last interaction with it was for a video in which I compared llama-index and Langchain for RAG. It was a while ago and I know they have evolved the framework a lot.

    • @sujit5013
      @sujit5013 22 days ago

      Thank you. There aren't enough llama-index tutorials and I think you'll explain them better than anyone! Learning so much from your videos.

    • @airoundtable
      @airoundtable 22 days ago

      @@sujit5013 I appreciate the kind words. I am working on two videos now, but I will keep llama-index in mind and check it out later for sure. Thanks again for the suggestion!

  • @goelnikhils
    @goelnikhils 25 days ago

    Amazing Content

  • @abirmajumder5229
    @abirmajumder5229 26 days ago

    How do I ask follow-up questions to a SQL database? Can the chatbot maintain history and context while using a SQL database agent?

    • @airoundtable
      @airoundtable 25 days ago

      The code that I published on GitHub does not have memory. But yes, it can have memory. You can use LangChain memory classes and integrate them into the chatbot. To start, check the following link: python.langchain.com/v0.1/docs/modules/memory/agent_with_memory_in_db/
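
A minimal sketch of one way to add conversational memory around a LangChain SQL agent, as suggested above. It assumes LangChain ~0.1-style imports, an OpenAI key in the environment, and a placeholder SQLite file; instead of a dedicated memory class it uses the simplest possible memory, prepending past turns to the agent's input.

```python
from langchain_community.utilities import SQLDatabase
from langchain_community.agent_toolkits import create_sql_agent
from langchain_openai import ChatOpenAI

db = SQLDatabase.from_uri("sqlite:///chinook.db")            # placeholder DB file
llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
agent = create_sql_agent(llm, db=db, agent_type="openai-tools", verbose=True)

chat_history = []  # simplest possible memory: keep past turns as plain text

def ask(question):
    # Prepend earlier turns so follow-up questions keep their context.
    context = "\n".join(chat_history)
    result = agent.invoke(
        {"input": f"Conversation so far:\n{context}\n\nNew question: {question}"}
    )
    answer = result["output"]
    chat_history.append(f"User: {question}\nAssistant: {answer}")
    return answer

print(ask("How many customers are from Canada?"))
print(ask("And how many of them made a purchase in 2013?"))  # follow-up resolved via history
```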

  • @debarghyadasgupta1931
    @debarghyadasgupta1931 27 days ago

    Based on the user prompt, how can we define an autonomous flow to determine when to use the SQL LLM agent vs. RAG for similarity search?

    • @airoundtable
      @airoundtable 26 days ago

      It depends on the use case. If there is a logic that a human can understand and use to decide, then an LLM on top of both components, with that logic passed to it, can make the decision for your users. If the logic is only a matter of whether the SQL agent fails to retrieve the answer, then you can add that fallback in your code as well. But if there is no specific logic behind the selection, it can't be integrated into the chatbot.

    • @debarghyadasgupta1931
      @debarghyadasgupta1931 11 days ago

      @@airoundtable I have seen some routing concepts recently. May be worth checking

  • @d2clon
    @d2clon 27 days ago

    Great demo. I would like to know more about how they built the training module. I assume this is a hybrid system with a retriever connected to a VectorDB or Elasticsearch to find training chunks from previously successful queries, and then the text-to-SQL module actually generates the query.

    • @airoundtable
      @airoundtable 26 days ago

      I cannot say with certainty since I haven't seen the code behind AYD, and I am not sure what you meant by the "training chunks". But my guess is more toward the same approach that LangChain has been working on. LLMs are good at generating queries for these databases, and by passing the schema of the database to the LLM it can understand how to retrieve the answer to the user's question. The next level would be to also add some necessary description of the database to the system prompt to help the LLM navigate more easily. And finally, with the help of few-shot learning you can do the final adjustment of the LLM on your database. If the database is huge, in a custom solution I would break it down into smaller databases and use a divide-and-conquer strategy to achieve good performance.
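
A minimal sketch of the recipe described above (schema, a short database description, and few-shot examples in the system prompt, then ask the model for a query). The schema, notes, examples, and model name are illustrative placeholders, not AYD's or the video's actual prompt.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Placeholder schema in the style of the Sakila sample database.
schema = """
CREATE TABLE film (film_id INT, title TEXT, rental_rate DECIMAL, length INT);
CREATE TABLE actor (actor_id INT, first_name TEXT, last_name TEXT);
CREATE TABLE film_actor (film_id INT, actor_id INT);
"""

system_prompt = f"""You are a MySQL expert. Write a single SQL query that answers
the user's question. Use only the tables below.

Schema:
{schema}

Notes: rental_rate is in dollars; length is in minutes.

Examples:
Q: How many films are longer than two hours?
SQL: SELECT COUNT(*) FROM film WHERE length > 120;
"""

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Which actor appears in the most films?"},
    ],
)
print(response.choices[0].message.content)  # the generated SQL, to be executed against the DB
```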

  • @usmanahmed1073
    @usmanahmed1073 29 days ago

    Video full of knowledge and well explained. I am looking forward to seeing your channel grow more!

    • @airoundtable
      @airoundtable 29 days ago

      Thanks! I appreciate the kind words. I am glad that the video was helpful

  • @darkmatter9583
    @darkmatter9583 1 month ago

    can you be my teacher? 🙏

    • @airoundtable
      @airoundtable 1 month ago

      Hi Darkmatter9583. I would be happy to help. You can go through the tutorials and ask your questions. I work on multiple projects at the moment, but I will respond to questions whenever I can. In case you would like a head start in the field, send me a description of your background and what you want to accomplish. I will try to guide you in the right direction. You can send me a message on LinkedIn.

  • @darkmatter9583
    @darkmatter9583 1 month ago

    Could you be my mentor?? Your knowledge is awesome; I would like to learn that way.

    • @airoundtable
      @airoundtable 1 month ago

      Hello Darkmatter9583. Thanks! I answered your other message

  • @farazmoradi6246
    @farazmoradi6246 1 month ago

    Very interesting video. It would be great if you could make a video about Lamini too!

    • @airoundtable
      @airoundtable 1 month ago

      Thanks! Does it have any free features? As far as I remember, Lamini is a commercial product. I am not planning to go through commercial products unless they make a lot of noise or I get a sponsorship :)) But I will keep it in mind for a potential project down the line. Thanks for the suggestion!

  • @jamesmk23
    @jamesmk23 1 month ago

    Any thoughts on how to use the SQL agent to work with multiple tables, say n = 10-20? Can the SQL agent pick the right table(s) and columns dynamically based on the query?

    • @airoundtable
      @airoundtable 1 month ago

      The same agent that I designed and used in this video deals with multiple tables. It has access to around 11 tables and can easily get the answers. If you check the link below, you can see the diagram of the database that I am using in this video: docs.yugabyte.com/preview/sample-data/chinook/

  • @alecferguson7933
    @alecferguson7933 1 month ago

    Thank you for this - brilliantly explained. With any of the examples you've shown, what happens if you ask a general question such as "Who is Joe Biden?". Does the LLM Agent or Knowledge Graph understand it's irrelevant to the information in the database and therefore doesn't provide an answer from its general training?

    • @airoundtable
      @airoundtable 1 month ago

      Thanks. I am glad you liked the video. Yes, the LLM is aware that it should not use its own knowledge and only provides information that exists in the database.

  • @sujit5013
    @sujit5013 1 month ago

    Is the code no longer available?

    • @airoundtable
      @airoundtable 1 month ago

      It is available: github.com/Farzad-R/LLM-Zero-to-Hundred/tree/master/RAG-GPT

  • @d2clon
    @d2clon 1 month ago

    Thanks for your presentation, keep going!

  • @sajinmohammed.p.e.5
    @sajinmohammed.p.e.5 1 month ago

    I see you are creating a Vector Index at 48:56. But are you using that anywhere? I see that you are using the embeddings created further down the program, so I wonder why you create the Vector Index?

    • @airoundtable
      @airoundtable 1 month ago

      I am adding the vector embeddings of my data to that vector index at 49:50. Then I use that vector index for RAG in this chatbot.
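
For readers unfamiliar with the flow mentioned above (create an index, add the document embeddings, then search it at query time), here is a minimal generic sketch using FAISS. The dimension and the random vectors are placeholders for real embeddings, and the video itself may use a different vector store.

```python
import numpy as np
import faiss  # pip install faiss-cpu

dim = 1536                      # e.g. the size of text-embedding-ada-002 vectors
index = faiss.IndexFlatL2(dim)  # step 1: create an (empty) vector index

doc_vectors = np.random.rand(100, dim).astype("float32")  # stand-in for real document embeddings
index.add(doc_vectors)          # step 2: add the document embeddings to the index

query_vector = np.random.rand(1, dim).astype("float32")   # stand-in for the embedded user query
distances, ids = index.search(query_vector, k=5)           # step 3: retrieve top-5 chunks for RAG
print(ids)
```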

  • @MrAzrai99
    @MrAzrai99 1 month ago

    Hi, I'm having trouble using a CSV as my source document for RAG in llamaindex. The result is not accurate.

    • @airoundtable
      @airoundtable 1 month ago

      Hi. If you are planning to use CSV for RAG, please check out my video called "Chat with SQL and Tabular Databases using LLM Agents". The methods that I implemented in this video for llamaindex are only for PDF and text documents.

  • @SonaliBhanudasMali
    @SonaliBhanudasMali 1 month ago

    New project using llm plzzzz

    • @airoundtable
      @airoundtable 1 month ago

      I am working on it. Coming soon!

  • @volkanyurtseven4802
    @volkanyurtseven4802 1 month ago

    nice work

  • @charlene1581
    @charlene1581 1 month ago

    If I wanted to query a very large database, what is the best method for the LLM to choose which table to query from? Currently using the Q&A pipeline, but running into problems with the LLM not making the correct SQL query and/or choosing the correct table even with a table retriever…

    • @airoundtable
      @airoundtable 1 month ago

      Update the agent strategy using this approach and check the results: python.langchain.com/v0.1/docs/use_cases/sql/large_db/

  • @muhammadtalmeez3276
    @muhammadtalmeez3276 1 month ago

    Bro, can you tell me which tool you used to build these flow diagrams? BTW, your videos are amazing.

    • @airoundtable
      @airoundtable 1 month ago

      Thanks. I use draw.io

    • @muhammadtalmeez3276
      @muhammadtalmeez3276 1 month ago

      @@airoundtable Thanks. Just like we can ask numerical or statistical questions of this chatbot, can we also ask questions (how and why), like why a number is high or low, i.e. an explanation of an answer, since LLMs are good at generating text?

    • @airoundtable
      @airoundtable 1 month ago

      @@muhammadtalmeez3276 Not with this agent. This agent is designed to just retrieve information from databases. What you are looking for is another feature that needs to be added to the system for data analysis. That is a different problem and very challenging to pull off, depending on the size and complexity of the database.

  • @mbmckeegan6820
    @mbmckeegan6820 1 month ago

    Can you share the values (other than the key) that you use in your env file for openai.api_type = os.getenv("OPENAI_API_TYPE"), openai.api_base = os.getenv("OPENAI_API_BASE"), and openai.api_version = os.getenv("OPENAI_API_VERSION")? I can't figure out the right values for those to run the code. Thank you!

    • @airoundtable
      @airoundtable 1 month ago

      What framework are you using? OpenAI from Azure or OpenAI directly?
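
If the code is using Azure OpenAI (which is what those three variables usually indicate), the values typically look roughly like the sketch below, written against the pre-1.0 openai package that the question quotes. Every value is a placeholder based on Azure's general conventions, not taken from this repository.

```python
# Illustrative Azure OpenAI settings for the pre-1.0 openai package.
# All values are placeholders: use your own resource name, key, and an
# API version that your Azure OpenAI resource actually supports.
import os
import openai

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-resource-name>.openai.azure.com/"
os.environ["OPENAI_API_VERSION"] = "2023-07-01-preview"
os.environ["OPENAI_API_KEY"] = "<your-azure-openai-key>"

openai.api_type = os.getenv("OPENAI_API_TYPE")
openai.api_base = os.getenv("OPENAI_API_BASE")
openai.api_version = os.getenv("OPENAI_API_VERSION")
openai.api_key = os.getenv("OPENAI_API_KEY")
```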

  • @acbush
    @acbush 1 month ago

    Thank you for this. It's filled in a lot of conceptual information for me and put me on the right path. Looking forward to watching the other videos in this series!

  • @srjulianmora
    @srjulianmora 1 month ago

    Thank you for the video. I would like you to make another video explaining how to mix RAG over multiple documents with database queries. The intention is for the chatbot to identify the need from the user's question and, through tool functions, decide whether to use RAG or SQL queries.

    • @airoundtable
      @airoundtable 1 month ago

      Thanks for the suggestion. The logic behind whether to use RAG or agents depends on the available database and the type of the user's query. If the logic is known to us, it can be implemented by adding an LLM at the first step of the pipeline with the proper instructions to call one of two functions: one pointing to RAG and one pointing to the agent. If you are familiar with function calling, it would be a very easy addition. I also have multiple videos showing how function calling works, and I will use that technique again for sure down the line.
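
A minimal sketch of the routing idea described above: a first-step LLM call with two tool definitions, one pointing to RAG and one pointing to the SQL agent. The tool names, the router prompt, and the two downstream helpers are hypothetical placeholders for your own pipeline.

```python
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def run_rag(question):
    # Placeholder for your retrieval pipeline (vector search + LLM answer).
    return f"[RAG answer to: {question}]"

def run_sql_agent(question):
    # Placeholder for your SQL agent (e.g. a LangChain SQL agent).
    return f"[SQL-agent answer to: {question}]"

tools = [
    {"type": "function",
     "function": {"name": "answer_with_rag",
                  "description": "Use for questions answered by semantic search over documents.",
                  "parameters": {"type": "object",
                                 "properties": {"question": {"type": "string"}},
                                 "required": ["question"]}}},
    {"type": "function",
     "function": {"name": "answer_with_sql_agent",
                  "description": "Use for questions that require querying the SQL database.",
                  "parameters": {"type": "object",
                                 "properties": {"question": {"type": "string"}},
                                 "required": ["question"]}}},
]

def route(question):
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": "Route the user's question to exactly one tool."},
                  {"role": "user", "content": question}],
        tools=tools,
        tool_choice="auto",
    )
    message = response.choices[0].message
    if not message.tool_calls:                 # model answered without picking a tool
        return message.content
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)
    if call.function.name == "answer_with_rag":
        return run_rag(args["question"])
    return run_sql_agent(args["question"])

print(route("What were our total sales per country last quarter?"))
```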

  • @sumitpawar000
    @sumitpawar000 1 month ago

    Solid content as usual 🙂🚀

  • @dswithanand
    @dswithanand 1 month ago

    This is quite interesting. Thanks for sharing this. I have a couple of questions: 1. Can we integrate this chatbot with our company website? 2. Can we customize the output? For example, if a product is not available, can it display a message like "The respective product is not available" or "Sorry, I can't help you with that"?

  • @protimaranipaul7107
    @protimaranipaul7107 1 month ago

    Thanks! Super interested, but curious: 1) We provide the connection to the database, but how do we ensure data privacy? 2) What are the ways to connect to multiple databases? Have you come across other open-source products like this?

    • @airoundtable
      @airoundtable 1 month ago

      Thanks. 1. For security and data privacy, please have a look at AYD's blog here: www.askyourdatabase.com/blog/how-does-askyourdatabase-protect-your-data 2. If you want to connect to multiple databases individually, this is straightforward. However, if you aim to connect to multiple databases simultaneously, allowing the LLM agent to access and retrieve information from them at the same time, I didn't see that feature available. If your goal is to extract meaningful information using shared knowledge between these databases, this falls under the scope of a knowledge graph problem, which requires a tailored approach for each scenario. I personally haven't seen any open-source product like AYD. The only alternative approach that I am familiar with is to design the whole system from scratch.