No bullshit here, straight to the point. Clearly the goal is to improve viewers' understanding of the subject first. Really like your brand
Thanks much appreciated
thanks for your LangChain playlist, it's definitely one of the best resources out there!
Would be awesome to have your review on custom tools and agents. I find them to be the trickiest part of LangChain, but also its greatest potential
Glad you liked it. Yes, I will do more with the Agents going forward and also plan to show making custom tools etc.
@@samwitteveenai Great to know! How are these related videos going?
Love your LangChain series , Sam 💪💪💪
Sam, that's precisely done. Thanks.
Very helpful. Good deep dives on this topic are still rare. Thanks.
Thanks!
Thanks for the great help regarding the clarification of the Langchain concepts!
Thanks for this very useful and descriptive video. I will start learning Lang chain
Outstanding Sam! Please keep them coming! 🎉
Great tutorial ! I can finally make sense of Tools.
Another great video!!!Thank you.
Just wowed at your tutorial. Thank you!
Truly helpful. Many thanks
Would be great if we can have a video on the Custom Agents
Thank you so much for your videos, they are absolutely brilliant, really helped me
This is certainly in the plans
That was extremely helpful. Could you cover custom agents? I need a way of using the response to make a decision. For example, in a multi-user chat log, if the question is directed at the chatbot, respond; else do nothing. Another example is an interviewer agent that has to decide either to probe for more information or to go to the next question. Could you give any guidance on these kinds of forking-chain or custom-agent scenarios?
Will certainly make more vids around the Agents stuff. The first will probably explain the paper behind it and then how you can manipulate the prompts. The multi-user idea would depend on a few things, e.g. does it know who is talking? E.g. user1: blah blah, user2: blah blah, etc.
@@samwitteveenai that would be awesome, your videos are really helpful! I think the general use case I need help with is calling a chain, getting a response, and then having some decision point that calls different chains depending on the outcome of chain 1. For the multi-user case, yes, I would know who the message is from. If user 1 said "does anyone know who the president of the USA is?" vs "chatbot, do you know who the president is" vs "user 2 do you know...". Chain 1 would need to work out that an answer could be given to statements 1 and 3, but to say nothing for 2. It's not the specific use case but more how to chain prompts where decisions from one prompt need to influence which of a range of secondary prompts should be called. It's quite possible this is easy to do and I'm just not fully understanding langchain yet 😊
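A minimal sketch of that decision-point pattern, with plain Python functions standing in for the two LLMChains (a real chain 1 would prompt the model to return a single label such as ANSWER or IGNORE; the startswith check below is just a hypothetical stand-in for that classification):

```python
def classify(message):
    # Stand-in for chain 1. A real prompt would ask the LLM something like:
    # "Is this message directed at another user? Reply ANSWER or IGNORE."
    return "IGNORE" if message.lower().startswith("user 2") else "ANSWER"

def route(message, branches):
    # Dispatch on chain 1's label; do nothing when no branch matches.
    label = classify(message)
    chain = branches.get(label)
    return chain(message) if chain else None

# "IGNORE" deliberately has no branch, so those messages get no reply.
branches = {"ANSWER": lambda m: f"Answering: {m}"}

print(route("does anyone know who the president of the USA is?", branches))
print(route("user 2 do you know who the president is?", branches))  # None
```

The same dict-of-branches dispatch works when each value is a real chain's `.run` method instead of a lambda.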
Really helpful video. Thank you!
Excellent stuff!
Great stuff, i really appreciate it!
excellent.
you should have a master class.
Yeah I am thinking of launching something like that. what would you like to see in it?
Pretty amazing great work sam😊
Great stuff !!
This is just mind blowing
It looks like you’ve strayed away from titling videos in this playlist with “Langchain Basics Tutorial #N”
Is this a conversational buffer memory issue? 😂 but thank you for these videos! I’m really learning a lot 🙏❤️
Hi Sam would appreciate if you could upload a video on router chains
Thank you for the video. It seems that we need to use the template "agent.agent.llm_chain.prompt.template", as I tried to run without it and it could not answer "how are you today" and kept looping. I thought it could figure this out without a template. This is the message it kept outputting:
"Action Input: None
Observation: None is not a valid tool, try another one.
Thought: I need to answer this question with a response."
Hi Sam, great work! Very helpful for me. I am curious about how folks think about the potential of LangChain!
How can you add custom tools alongside the provided tools like serp, wiki, terminal and such, and how can you alter the prompt template?
thanks, this helps me a lot!
legendddddddd
Amazing video! Finally I was able to understand and utilize agents. Thank you.
I'm always experimenting with using open-source models along with OpenAI. The terminal and search agent examples you discussed in the video fail with "ValueError: Could not parse LLM output:" when tried with Hugging Face models. A Google search revealed that models used with these agents must follow the "conversational-react-description" template. Any ideas on which models would follow this template and would allow us to use OpenAI alternatives? Thank you
Let me look into this. I think Cohere and Anthropic models should be ok. You could also finetune a model for this. I have some more vids on Agents planned so I will add this into the mix.
Thanks. I will try it ASAP. How is it possible to end the conversation when it says "Finished Chain"? I want to prompt the user to enter a new prompt at this stage.
Can you explain zero-shot-react-description? What is that?
Hi Sam great content. I am trying to figure out how I can build an agent that can answer questions from a) a repository of unstructured sources b) 5 to 10 SQL tables.
Can we have the agent run a mandatory tool and then decide on some other tools?
Indeed, your series is outstanding and is helping me build my POC. Did you ever get around to doing a video on zero-shot-react-description? I think that would help me progress on my form-filling POC. Thank you very much for sharing.
No, I have been wanting to make a ReACT video. I do show Zero Shot ReACT in some other vids, but need to make a ReACT vid to explain it, as it really is magic. Will try to get it out this week.
Great vids, all of 'em. How did you implement the "terminal" tool, though?
Thank you for your amazing video and all the work you do. I was wondering how to use langchain to perform data analysis on one or more datasets. Let's say I have leads, sales, and orders datasets. Can I use langchain to perform some analysis, such as asking which customers placed the last order? How were sales last month?
Yes you could write your own chain to do various tasks like info extraction. You could also combine it with a database to answer more traditional database questions like how were sales last month etc.
Great tutorial, thank you, Sam!
The agent executor prompt can become quite long. Is this being sent to the OpenAI LLM and costing tokens, or is it parsed by the agent? How does it work, and how does it add to the economics?
Great question! I am also curious about this.
Very good. Can we get this same setup into LangServe to serve as an API?
Does this example use only one LLM - OpenAI? What does zero-shot-react-description do? It generates the template and then calls OpenAI recursively, incorporating each response until the answer is found. Is that correct?
Hi Sam, Can we add chat memory buffer to agent for chat continuation?
Did you find an answer to this question?
What if I want to use a custom tool that I made using a retriever?
I loaded the tools as
tools = load_tools(["serpapi"])
but it throws an error
How can one return source_documents while working with agents having multiple tools?
Hey Sam. First, as usual, thanks for the amazing content you generously share with the community. We learn a lot here. I have a question you maybe answered already. I am struggling using agents as you did, but changing GPT to a smaller model like MPT-7B. It fails with an error.
Forget my question. I think the model does not have the capability to reason that far to handle the prompts and self-reflection attached. Anyway, still very powerful combined with OpenAI indeed
I am working on some things for testing open LLMs on reasoning and tasks like this and may also release a model for it.
@@samwitteveenai that is great! Eager to see the result of your exploration. If we can help you in any manner, feel free.
How'd you get the sick documentation background?
At minute 2:44, talking about zero-shot-react-description, you mention a previous video and paper. Could you point out the name of this paper?
Very, very helpful video, thank you! I have to create a chatbot that knows only my documents' information. If the question isn't about the docs' content, it must answer "I don't know" or something similar, and it should not search for answers on the internet. Is there a way to create a search tool only for my docs folder? And then, for questions that aren't about the context, should I create another tool?
Hi Sam, I tried your colab notebook example using the same OpenAI initialization (temp=0) and the exact same tool/prompt/agent construction. However, when the query does not require one of the two tools, like agent.run("Hi How are you today?"), the model seems to be confused and trapped in an infinite loop. How can I solve this?
this is the output:
Entering new AgentExecutor chain...
This is not a question that can be answered with a search or a calculator.
Action: None
Action Input: None
Observation: None is not a valid tool, try another one.
Thought: I need to answer this question with a response.
Action: None
Action Input: None
Observation: None is not a valid tool, try another one.
Thought: I need to answer this question with a response... and it keeps repeating.
Hey, I have a question:
I built an Agent which I want to host via Gradio. It works and prints out the right result. Also on my machine I see the agent executor chain. How can I also print out the agent executor chain (in the machine and in Gradio). I mean If you can tell me how to print it in the machine, I can find a way to print it in gradio!
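One way to get the executor trace out (a sketch, assuming the verbose chain is printed to stdout as usual): capture stdout around the run and return it with the answer, so Gradio can display both. `fake_agent_run` below is a hypothetical stand-in for your real `agent.run`.

```python
import contextlib
import io

def fake_agent_run(query):
    # Stand-in for agent.run(query): prints a trace, returns an answer.
    print("> Entering new AgentExecutor chain...")
    print("Thought: I can answer directly.")
    return f"Answer to: {query}"

def run_with_trace(query, run=fake_agent_run):
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):  # capture the printed chain
        answer = run(query)
    return answer, buf.getvalue()          # show both in the Gradio UI

answer, trace = run_with_trace("Hi")
print(trace)
```

In Gradio you would then return both `answer` and `trace` from your handler and bind them to two output components.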
Is there a way we can combine the RetrievalQAchain and Tools using Agents
I have a question: in some cases, I have my own data stored in SQLite. Assuming that I want the agent to access my SQLite database to query data, how can I do that?
There is an Agent in LangChain that can do SQL queries. That should work.
@@samwitteveenai thanks, following your instruction, I tried to make a function.
from langchain.sql_database import SQLDatabase
from langchain.chains import SQLDatabaseChain

def query(input=""):
    # llm is assumed to be defined earlier
    sqlite_db_path = 'data/San_Francisco_Trees.db'
    db = SQLDatabase.from_uri(f"sqlite:///{sqlite_db_path}")
    db_chain = SQLDatabaseChain(llm=llm, database=db, verbose=True)
    result = db_chain.run(input)
    return result
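To let the agent actually call a function like this, it can be wrapped as a tool. A minimal sketch, using a stand-in Tool class (the real one would be langchain.agents.Tool) and a hypothetical in-memory table, so it runs without LangChain or the actual .db file:

```python
import sqlite3

# Hypothetical in-memory stand-in for data/San_Francisco_Trees.db
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trees (species TEXT)")
conn.execute("INSERT INTO trees VALUES ('oak'), ('maple')")

def query(sql=""):
    # In the snippet above, this role is played by db_chain.run(input)
    return conn.execute(sql).fetchall()

class Tool:
    # Stand-in for langchain.agents.Tool(name=..., func=..., description=...)
    def __init__(self, name, func, description):
        self.name, self.func, self.description = name, func, description

sql_tool = Tool(
    name="SF Trees DB",
    func=query,
    description="Useful for answering questions about San Francisco street trees.",
)
print(sql_tool.func("SELECT COUNT(*) FROM trees"))  # [(2,)]
```

The agent chooses the tool from its description, so the description string matters as much as the function itself.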
Great video. Do you have any videos that explain how to use agents and handle the output in Django and FastAPI for real-world applications? I have found an overload of information and lots of issues getting the output using chat ReACT models. Also, how do you pass prompt templates to an agent?
I find the best way to handle prompts is to overload them if you need to change them. Most of the time, though, you can just pass variables into the prompts when you call them. Getting output should be fine as strings; if you want JSON back etc., then you can use a custom output parser. I have a number of vids showing those things.
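The custom output parser idea can be sketched in plain Python: ask for JSON in the prompt, then parse the string the model returns. The parse helper mirrors what a LangChain BaseOutputParser subclass's parse() method would do; llm_output is a made-up example response, not real model output.

```python
import json

def parse(llm_output):
    # Tolerate prose around the JSON by slicing to the outermost braces.
    start, end = llm_output.find("{"), llm_output.rfind("}") + 1
    return json.loads(llm_output[start:end])

llm_output = 'Sure! Here is the result: {"name": "oak", "height_m": 12}'
print(parse(llm_output))  # {'name': 'oak', 'height_m': 12}
```

From here, FastAPI or Django can serialize the resulting dict straight into a JSON response.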
How do you give an initial goal to a langchain agent? It's fine to connect it to various tools, but I'd like to give it an initial goal/task. Any idea?
Hi. Thanks for the tutorial. Can you please help me understand what LangChain component I should use? I need to ask the user 2 follow-up questions when the user asks "How to pay less taxes?" and then prompt the model with the initial user question plus the additional information from the answers. I can't find an example of how to use Agents for this task. Or maybe I should use Human as a tool from the Agents integrations, but I do not understand how to force it to ask particular questions when a particular input is received. Thanks.
Your voice is similar to Stewie Griffin from The Family Guy :D :P
How to extract the Agent Executor chain thought processes and pass it to a FastAPI call?
Take a look at the custom agent /tools vids and ReACT
@Sam, thanks for your great videos and the Colab Notebook code.
Question: Is it possible to create an agent with a function in Python?
For example:
def area(diameter):
    return 3.14 * (diameter/2)**2
Of course, I want to use more complex functions with more variables.
Thanks
Do you want to call the function from an LLM tool, or have the model output a Python function? Check out the video about the PAL chain, which creates a Python function from the LLM.
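If the goal is calling the function from a tool, a sketch: agent tools take and return strings (the "Action Input"), so a wrapper parses the input and formats the output. This is a plain-Python illustration of the wrapper, not the exact LangChain Tool API, and the single-number input format is an assumption.

```python
import math

def area(diameter):
    # math.pi instead of the 3.14 approximation
    return math.pi * (diameter / 2) ** 2

def area_tool_func(text):
    # Agent tools receive a string ("Action Input"), so convert here.
    # More complex functions could pack several values into the string,
    # e.g. "10,3", and split them out before calling.
    return f"{area(float(text.strip())):.2f}"

print(area_tool_func("10"))  # 78.54
```

That wrapper is what you would hand to the agent as the tool's `func`, with a description telling the model what string format to send.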
My problem is: how do I combine Agents and SequentialChains? I have been looking on the internet, but found nothing.
This video is pretty old. The best way to do this now is to use LangChain Expression Language. I plan to make a new series with a lot of these changes soon. I already have one video about LCEL up
Hi, when does the agent decide to switch to another tool? Is there any score threshold as a reference?
Hi thanks for the video, I am getting error: ValueError: Got error from SerpAPI: Invalid API key. -- I have the API key set to env and -- if I use GoogleSearch(search_params) it is working fine. Thanks in advance.
@Sam Witteveen Hello Sam, Can we change the prompt associated with agents? If yes then let me know how to do that.
Yes totally I go through that in a number of the early videos.
@@samwitteveenai I am checking out your videos
Hi Sam, thanks for this video. Can we use LangChain tools with WizardLM or any other LLM model without needing an OpenAI key? Because that would be great.
I showed this in one of the vids last week. I will show some more non Open AI options in another video soon.
How can I get it to use the OpenAI LLM first before trying the tools?
You could make a normal LLM chain and put that in a combined chain.
Hi, I need help (on Termux via Kali Linux, the LLM agent throws an error when connecting to the AI)... How? How do I solve this?
In regards to using agents and tools:
Is there a way to just use the LLM for the initial prompt and then, if the LLM can't answer, revert to tools? I keep running into the issue where the agent decides to use a search tool rather than just using OpenAI's LLM (gpt-3.5) to try to answer the initial question.
You can get around this by changing the prompt and the tool description so that it only uses the search as a last resort etc. You will need to experiment with the prompt.
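A rough sketch of those two knobs. The strings here are illustrative, not the library defaults; in LangChain they would go into the Tool's description and the agent's prompt prefix.

```python
# Hypothetical tool description nudging the agent to prefer its own knowledge
search_description = (
    "Use this ONLY as a last resort, when you are certain you cannot "
    "answer from your own knowledge (e.g. events after your training data)."
)

# Hypothetical prompt prefix carrying the same instruction
prefix = (
    "Answer the question yourself whenever possible. Only use a tool if "
    "your own knowledge is insufficient. You have access to these tools:"
)

print(search_description)
```

The agent picks tools by reading these strings at each step, which is why small wording changes can shift its behaviour.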
Hi, this material is great.
I have a follow-up question: would you know how I can set up an agent to ask follow-up questions, as it needs the answers from the user to decide how best to proceed?
For example: The user may ask a chatbot a question that can be theoretically answered by running a query in a SQL database, that the agent can access.
However, depending on the nature of the question, I would like to have the agent ask pre-programmed follow-up questions, so that the agent can use that extra information to run a more precise query.
Any thoughts or pointers would be much appreciated.
Thanks.
@ MrJvr80 did you find the solution for that ? if yes, then can you please share it,
@@MindsMusing did you find the solution for that ? if yes, then can you please share it,
@@alperenyuksel7184 I did not find a solution
I found that for questions like "How old is the current President?", the result that comes back is wrong, because it calculates the current day as Jun x 2021.
Weird, because if I ask "what is the date today?", I get the correct answer. Google's search (tool: "google-search") was used.
How can we write the web crawled information in a file using this agent and tool? Please help
There is a write file tool, check that out.
@@samwitteveenai but I can't use the write file tool in zero-shot ReACT agents or chat agents. Could you please help with which agent can support the write file tool? And I want the total crawled information, not a summary or any specific short answer.
How do you make a vector DB a tool so it can be in the tool list?
Check out my video on custom tools
What's the paper about getting LLMs to take actions and generate steps?
The main one is the ReAct paper - arxiv.org/pdf/2210.03629.pdf I plan to make a vid on this and some similar research.
@@samwitteveenai cheers!
Seems to be similar to HuggingGPT.
How do I set up the search engine API key?
From memory, the LangChain docs should have a link to that.