These videos never cease to impress me. Straight-forward and effective. Thank you, LangChain team!
Fantastic work as always. Thanks to the LangChain team.
Super helpful series. I delved a lot in langchain source code, docs to customize already well-established examples and started these series for LangGraph it's a great format thanks a lot.
are you using ai to write youtube comments?
@@tonyppe Just because you read the word "delve"? ;-) Look at the last words of his comment: missing punctuation marks :)
Thank you for constantly posting such videos.
So easy to use and adopt.
This is a super helpful series, a quick starter, easy to follow along with practical examples, thanks so much Harrison! Starting to experiment right away!
Excited to get my hands dirty with langgraph. Hopping on now!
Thank you! Very helpful. I did the first 2 exercises. I'll be back for the 3rd.
Great way of explaining. Thank you. Will dive into it sometime soon.
This is incredibly beautiful
Very interesting and well explained! Thanks 👌
Bro is just a beast!
Awesome! Thank you guys for great work
LangGraph makes creating multi-agent processes easier. SymthOS is essential viewing for anyone interested in cutting-edge AI frameworks. #AI #MultiAgentWorkflows #SymthOS
Great video. Can you please create an example in LangGraph showing how to use a SQL database tool and call the tool with an agent? More importantly, do a RAG search with it. That would be helpful.
Thank you for the discussion.
For anyone facing errors, here are the two I ran into:
1: Make sure Matplotlib is installed in your environment.
2: Rename "Chart Generator" to "Chart_Generator"; this fixes an error where the regex does not recognize the name "Chart Generator".
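The rename works because agent names get passed as function/tool names in the OpenAI function-calling API, which only accepts letters, digits, underscores, and hyphens (no spaces). A minimal sketch of a name sanitizer, assuming that pattern (the 64-character cap follows OpenAI's documented tool-name limit):

```python
import re

# Assumed pattern for OpenAI function/tool names: letters, digits,
# underscores, hyphens, up to 64 chars. "Chart Generator" fails it;
# "Chart_Generator" passes.
VALID_NAME = re.compile(r"^[a-zA-Z0-9_-]{1,64}$")

def safe_node_name(name: str) -> str:
    """Replace characters the function-name regex rejects with underscores."""
    candidate = re.sub(r"[^a-zA-Z0-9_-]", "_", name)[:64]
    if not VALID_NAME.match(candidate):
        raise ValueError(f"cannot sanitize {name!r}")
    return candidate

print(safe_node_name("Chart Generator"))  # -> Chart_Generator
```

Running agent names through a helper like this up front avoids the cryptic regex error entirely.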
It would be great if you added a disclaimer at the beginning of the video mentioning that it's for intermediate and advanced levels, because beginners will feel lost and sometimes more confused, just like me.
In the supervisor example, the model returns:
{
  "function_call": {
    "arguments": "{\"next\": \"Coder\"}",
    "name": "route"
  }
}
How is this used to determine the next agent? Or how is AgentState populated with next: "Coder" after getting the above output from the model? I can see a JsonOutputFunctionsParser, but I can't understand how the next value is determined from that.
In the conditional edges setup. You parse it into a dict and then branch conditionally on the 'next' value.
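A minimal sketch of that mechanism in plain Python (function names here are illustrative; in LangChain, JsonOutputFunctionsParser performs the JSON-decoding step, and "__end__" stands in for LangGraph's END sentinel):

```python
import json

# The raw function_call payload the model returns (as in the comment above).
model_output = {
    "function_call": {
        "name": "route",
        "arguments": '{"next": "Coder"}',
    }
}

def parse_route(output: dict) -> dict:
    """What the output parser effectively does: decode the arguments
    string into a dict, e.g. {"next": "Coder"}."""
    return json.loads(output["function_call"]["arguments"])

# The supervisor node returns this dict, so the graph state gains
# state["next"] = "Coder"; the conditional edge then reads that key.
state = parse_route(model_output)

def pick_next_node(state: dict) -> str:
    """Conditional-edge function: route on state['next']."""
    return "__end__" if state["next"] == "FINISH" else state["next"]

print(pick_next_node(state))  # -> Coder
```

So 'next' is populated simply because the supervisor node's return value is merged into the graph state, and the conditional edge is a plain function over that state.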
Great video. One big thing I'm trying to figure out: how can the tools access the graph state? I mean custom tools. I really need them to have context of the conversation, user session metadata, etc., and I can't pass them that info if the tools are called just with parameters filled with synthetic data generated by the LLM. I'm sure there is a way to do it but I can't seem to figure it out. Thanks!
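One common workaround (a sketch, not the official LangChain answer): build the tool inside a factory function so it closes over the session context, and let the LLM supply only the arguments it should control. All names below are illustrative:

```python
def make_lookup_tool(session: dict):
    """Factory: the returned tool closes over session metadata, so the
    LLM only has to supply the query string."""
    def lookup(query: str) -> str:
        # Session data is available here without the LLM ever seeing it
        # or having to invent it as a parameter.
        return f"user={session['user_id']} query={query}"
    return lookup

# Build the tool per request/session, then register it with the agent.
tool = make_lookup_tool({"user_id": "u42"})
print(tool("weather"))  # -> user=u42 query=weather
```

The same closure trick works for conversation history or any other state you don't want round-tripped through the model.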
Such a cool video!
Great job! Please advise how I would add a reward or policy into the state for further decision-making purposes.
Hi! Is LangChain compatible with Redshift/Databricks (especially the text-to-SQL framework)? Thank you.
I can't make this example work using AzureOpenAI, I am receiving the error: 'create() got an unexpected keyword argument 'functions''
after executing:
result = agent.invoke({"input": "what's the weather in SF?", "intermediate_steps": []})
Are agents supported using AzureOpenAI? I have been trying different approaches but I cannot make my agents work using AzureOpenAI.
Are there examples on how you can use multi agent workflow that doesn’t involve openAI function calling?
Thank you!
I tried to implement the supervisor-based multi-agent framework for my use case, but after my workers return something to the supervisor, the supervisor doesn't call FINISH and gets into a loop of calling the same worker again and again. Has anyone faced this issue, or does anyone know how to fix it?
Thank you LangChain team, this helps a lot. In the example multi-agent graph, all agents share the same instance of a large language model (LLM); is it possible to use different LLMs for different agents?
yep
You can use Llama 3.1 with Groq.
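Nothing in the graph forces the agents to share one model object: each agent node just holds its own instance. A sketch with a plain-Python stand-in for the real chat-model classes (in LangChain these would be e.g. ChatOpenAI and ChatGroq; names below are illustrative):

```python
class FakeLLM:
    """Stand-in for a chat model such as ChatOpenAI or ChatGroq."""
    def __init__(self, name: str):
        self.name = name

    def invoke(self, prompt: str) -> str:
        return f"[{self.name}] {prompt}"

# Each agent gets its own model instance; mixing providers is fine.
agents = {
    "researcher": FakeLLM("gpt-4o"),
    "coder": FakeLLM("llama-3.1-70b-groq"),
}

print(agents["coder"].invoke("write a sort"))
```

When wiring the graph, pass the per-agent model into each node's constructor instead of reusing one shared `llm` variable.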
Does anyone get issues where the supervisor agent loops over and over calling the sub agents?
Yes same. Have you managed to fix it?
Could you show how to bring Humans in the Loop in each architecture or any one of them?
Hello, thank you for the video. I have a problem: I created my own tools instead of the "tavily search" and "PythonREPL" tools. I did everything the same as in your code, but I can't get the end token {'supervisor': {'next': 'FINISH'}}. Why? Can anyone help me, please?
It continuously loops through my tools (1st tool... 2nd tool... 1st tool...), but it never reaches the FINISH token and never exits the loop.
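Two things worth checking (a sketch, not a guaranteed fix): make sure FINISH is actually in the list of options the supervisor's route function offers the model, and cap the number of steps so a mis-routing supervisor can't spin forever (LangGraph exposes a similar safety valve via its recursion limit). Illustrative plain-Python version:

```python
# FINISH must be among the choices presented to the routing model,
# or it can never be selected.
OPTIONS = ["my_tool_1", "my_tool_2", "FINISH"]

def run(route, max_steps: int = 10) -> str:
    """Supervisor loop with an explicit FINISH exit and a step cap."""
    for _ in range(max_steps):
        choice = route()          # would be the LLM routing call
        if choice == "FINISH":
            return "done"
        # ... dispatch to the chosen worker here ...
    return "aborted: hit step cap"  # stops infinite worker loops

# Toy routing function that finishes on the third call.
calls = iter(["my_tool_1", "my_tool_2", "FINISH"])
print(run(lambda: next(calls)))  # -> done
```

If the model still never picks FINISH, tightening the supervisor's system prompt ("respond with FINISH when the workers have answered") often helps.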
What are the differences between CrewAI and LangGraph?
CrewAI is a thin wrapper around langgraph.
Doesn't work with the current version of LangGraph/LangChain; the issues were never solved.
I tried to implement multi-agent using a supervisor agent and am running into an issue where it couldn't parse the function call:
langchain_core.exceptions.OutputParserException: Could not parse function call: 'function_call
How do we get access to langsmith?
DM Harrison on twitter @hwchase17 :)
How can I request access to langsmith??
hey, if you drop Harrison a message on twitter @hwchase17 he'll get you access
@@LangChain already done he was very fast to reply
hi how do i get access to langsmith?
Drop Harrison a message on twitter @hwchase17 and he'll sort you out
@@LangChain thanks! i just gotten my access!
Need access to langsmith
DM Harrison on twitter @hwchase17 :)
requesting for Langsmith access please 😊
DM Harrison on twitter for access @hwchase17 :)
Oh no, I feel so sorry for everyone trying to use this. You don't have nearly enough basic logging functionality or debugging/investigation tooling in LangChain to make using multiple prompt stages remotely tolerable. I wonder how long it will take people trying to actually implement this to solve a problem to figure that out.
didn’t they make langsmith exactly for this?
@@ste7081 But it's not private :( Honestly, a framework should not rely on a paid service for basic functionality. I wish I could use it, though.
Great video! Can you hook me up with Langsmith?
The last one is far from easy to implement. I wonder if learning LangGraph is worthwhile compared to learning how to do the same from scratch. It feels like learning a new programming language.
Great video! Can you hook me up with langsmith? :)
Shoot Harrison a message on twitter @hwchase17 :)
What's your Twitter?