LangGraph: Planning Agents
- Published Feb 12, 2024
- In this video, we will show you how to build three plan-and-execute style agents using LangGraph, an open-source framework for building stateful, multi-actor AI applications.
These agents promise the following properties relative to older "ReAct"-style agents:
⏰ Faster Execution: fewer calls to large models, and execution of tools while the LLM is still decoding
💸 Cost Efficiency: you can use smaller, domain-specific models for sub-tasks
🏆 Enhanced Performance: explicit planning forces the LLM to think about the whole trajectory
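The loop these properties describe can be sketched without any particular framework. A minimal, framework-agnostic sketch of plan-and-execute, where the planner and the per-step tools are hypothetical stand-ins (a real agent would call an LLM in `plan` and real tools in `execute`), not LangGraph APIs:

```python
# Plan-and-execute in miniature: plan the whole trajectory up front,
# then dispatch each step to a (potentially smaller, cheaper) worker.

def plan(task: str) -> list[str]:
    """Stand-in planner: a real agent would call an LLM here."""
    return [f"research: {task}", f"summarize: {task}"]

def execute(step: str, tools: dict) -> str:
    """Dispatch one plan step to the matching tool or sub-model."""
    name, _, arg = step.partition(": ")
    return tools[name](arg)

def plan_and_execute(task: str, tools: dict) -> list[str]:
    results = []
    for step in plan(task):  # explicit up-front plan, executed step by step
        results.append(execute(step, tools))
    return results

# Hypothetical tools standing in for domain-specific models/functions.
tools = {
    "research": lambda q: f"notes on {q}",
    "summarize": lambda q: f"summary of {q}",
}
print(plan_and_execute("LangGraph", tools))
```

Because the full plan exists before execution starts, independent steps can also be run concurrently, which is where the speed and cost claims above come from.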
Links
-----------
Basic Plan-and-Execute
- Python: github.com/langchain-ai/langg...
- JS: github.com/langchain-ai/langg...
- Plan and solve paper: arxiv.org/abs/2305.04091
ReWOO
- Python: github.com/langchain-ai/langg...
- Paper: arxiv.org/abs/2305.18323
LLMCompiler
- Python: github.com/langchain-ai/langg...
- Paper: arxiv.org/abs/2312.04511
Developing AI applications is easier with LangSmith. Create a free account at
smith.langchain.com/
You are transforming the LLM market and its adoption. I'm a fan and we use it widely in my projects and products.
Feedback from someone who consumes project commits, even blogs and all videos: PLEASE invest in a microphone!
Congratulations on the great work over the last year.
You are a very smart and talented developer. I found this to be humbling, educational, and very insightful. I especially appreciated the insider developer aspect of the video. It was a treat to get to follow the workflow and build process. Well done! Thank you!
Thank you very much for sharing. Very informative and clear.
Yall are a god-send to open source 😘
Great video! I appreciate the insights. Could you clarify a bit when I should use StateGraph or MessageGraph in my projects?
Please also provide similar tutorials for LangChain JS; I think the community lacks resources there compared to the Python SDK.
How do you handle human in the loop in this case?
I have tried the LLMCompiler paradigm. The joiner seems to lose context of the available tools. Not sure how to handle this.
can langsmith trace native llm api calls?
The last two agents are wrong.
ReWOO never sees the responses in the solver: we only pass the plan and the tasks, but not the responses to the plan steps.
And LLMCompiler doesn't handle argument replacement: it calls everything in parallel, passing the tool-result references instead of the data itself.
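The ReWOO issue the commenter describes can be illustrated with a sketch: the solver prompt should interpolate the observed tool results (the `#E1`, `#E2`, ... evidence references from the paper) rather than only the plan text. All names here are hypothetical, not the repo's actual code:

```python
# Build a ReWOO-style solver prompt that substitutes real tool results
# for their evidence references, so the solver sees the responses.

def build_solver_prompt(task: str, plan_steps: list[str], evidence: dict) -> str:
    lines = [f"Task: {task}"]
    for step in plan_steps:
        # Replace each evidence reference (e.g. "#E1") with the actual result.
        for ref, result in evidence.items():
            step = step.replace(ref, result)
        lines.append(step)
    lines.append("Answer the task using the evidence above.")
    return "\n".join(lines)

plan_steps = ["Plan: look up X -> #E1", "Plan: compare #E1 with Y -> #E2"]
evidence = {"#E1": "42", "#E2": "equal"}
print(build_solver_prompt("What is X?", plan_steps, evidence))
```

If the substitution step is skipped, the solver receives only the literal `#E1`/`#E2` placeholders, which is exactly the failure mode being reported.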
Using prompts from the LangChain Hub is a mess because the LangChain documentation doesn't cover how to use them, modify them, etc., and to top it off, sometimes you can't even make changes to them in the hub, because committing changes to multimodal prompts is broken.
I had to reverse engineer them and then create my own class to manage prompt updates
Please show documentation on how to edit them or at least offer an alternative
All of these should be reshot for JS also.
This is a video of “you can see …”