Tool Calling with LangChain

  • Published Jun 9, 2024
  • Large Language Models (LLMs) can interact with external data sources via tool calling functionality. Tool calling is a powerful technique that allows developers to build sophisticated applications that leverage LLMs to access, interact with, and manipulate external resources like databases, files, and APIs.
    Providers have been introducing native tool calling capability into their models. What this looks like in practice is that when the LLM provides an auto-completion to a prompt, it can return a list of tool invocations in addition to plain text. OpenAI was the first to release this roughly a year ago with “function calling”, which quickly evolved to “tool calling” in November. Since then, other model providers have followed: Gemini (in December), Mistral (in February), Fireworks (in March), Together (in March), Groq (in April), Cohere (in April) and Anthropic (in April).
    All of these providers expose slightly different interfaces (in particular, OpenAI, Anthropic, and Gemini, the three highest-performing models, have very different interfaces). We've heard from the community a desire for a standardized tool calling interface that makes it easy to switch between providers, and we're excited to release one today. A minimal Python sketch of the interface follows the links below.
    Blog: blog.langchain.dev/tool-calling-with-langchain/
    Python:
    List of chat models that shows status of tool calling capability: python.langchain.com/docs/int...
    Tool calling explains the new tool calling interface: python.langchain.com/docs/mod...
    Tool calling agent shows how to create an agent that uses the standardized tool calling interface: python.langchain.com/docs/mod...
    LangGraph notebook shows how to create a LangGraph agent that uses the standardized tool calling interface: github.com/langchain-ai/langc...
    JS:
    List of chat models that shows status of tool calling capability: js.langchain.com/docs/integra...
    Tool calling explains the new tool calling interface: python.langchain.com/docs/mod...
    Tool calling agent shows how to create an agent that uses the standardized tool calling interface: js.langchain.com/docs/modules...
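
    As a quick illustration of the standardized interface, here is a minimal Python sketch (it assumes langchain-openai and langchain-anthropic are installed with API keys set; the model names and the toy multiply tool are example choices, not part of the video):

    from langchain_core.tools import tool
    from langchain_openai import ChatOpenAI
    from langchain_anthropic import ChatAnthropic

    @tool
    def multiply(a: int, b: int) -> int:
        """Multiply two integers."""
        return a * b

    # The same .bind_tools() / .tool_calls interface works across providers,
    # so swapping the underlying model is a one-line change.
    llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
    # llm = ChatAnthropic(model="claude-3-sonnet-20240229", temperature=0)

    llm_with_tools = llm.bind_tools([multiply])
    msg = llm_with_tools.invoke("What is 4 times 23?")
    print(msg.tool_calls)
    # e.g. [{'name': 'multiply', 'args': {'a': 4, 'b': 23}, 'id': '...'}]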

COMMENTS • 26

  • @unhandledexception1948
    @unhandledexception1948 1 month ago +1

    You guys really rock! Always ahead of the curve bringing innovations in this space ;-)

  • @juliustuckayo8973
    @juliustuckayo8973 1 month ago +1

    Very clear explanation! Love Python! Keep 'em coming 🎉

  • @ajaybeniwal203
    @ajaybeniwal203 1 month ago +5

    Are these updates available in the TypeScript package as well?

  • @SashaBaych
    @SashaBaych 1 month ago +1

    Chester, thank you for a very clear walkthrough!
    Guys, can somebody please clarify what the difference would be between just invoking the model with bound tools versus creating an agent with tools using the method shown in the tutorial, especially in the context of LangGraph? I see so many tutorials on LangGraph, but only a few of them use AgentExecutor. Do I even need to use AgentExecutor with LangGraph?

  • @waneyvin
    @waneyvin 1 month ago +1

    Thanks a lot for your information. Is it compatible with all LLMs? And what is the difference between bind_tools and create_react_agent? Does the agent think before it chooses a tool?

  • @pazarazi
    @pazarazi 1 month ago

    Thanks for sharing.
    BTW, the links that point to the resource page for "Tool calling agent shows how to create an agent that uses the standardized tool calling interface" are invalid.

  • @lshagh6045
    @lshagh6045 1 month ago

    In case we need to use an open-source LLM like Llama 2, I can see that you deprecated the previous agent library that enabled us to do so.

  • @hiranga
    @hiranga 1 month ago

    @LangChain is there a way to self-heal the invalid tool args created by an LLM? Groq Mixtral appears to suffer here..

  • @brilliant332
    @brilliant332 18 days ago

    I've had trouble with dates. I'm curious if you have references for using different Pydantic field types in tool calling? Any help would be great!
    My use case:
    from typing import Optional
    from datetime import date
    from langchain_core.pydantic_v1 import BaseModel, Field

    class CompanySobject(BaseModel):
        incorporation_date: Optional[date] = Field(None, description="Incorporation date (format: YYYY-MM-DD)")

    Exception in parsing company SObject: 1 validation error for CompanySobject
    incorporation_date: invalid datetime format (type=value_error.datetime)

  • @AngelMartinez-ge2gs
    @AngelMartinez-ge2gs 1 month ago

    Good vid. Could you elaborate on the difference between `create_tool_calling_agent()` and other commonly used agents, such as `create_react_agent()`?

    • @maruthikonjeti4572
      @maruthikonjeti4572 1 month ago

      As far as I know, create_react_agent works with ReAct-based agents, whereas here it's more customizable and users can convert a normal LLM into an agent.

    • @LangChain
      @LangChain 1 month ago +2

      Some LLMs are tuned to output these "tool calls" with an expected format. This agent takes advantage of those features, whereas ReAct typically relies on prompting the LLM to follow a certain natural language pattern (e.g., "Thought: ", "Action: ", etc.).
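
      For illustration, a rough sketch of the two constructions (the model, prompts, and toy tool below are placeholder assumptions, not from the video):

      from langchain import hub
      from langchain.agents import AgentExecutor, create_react_agent, create_tool_calling_agent
      from langchain_core.prompts import ChatPromptTemplate
      from langchain_core.tools import tool
      from langchain_openai import ChatOpenAI

      @tool
      def get_word_length(word: str) -> int:
          """Return the number of characters in a word."""
          return len(word)

      llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
      tools = [get_word_length]

      # ReAct: the LLM is prompted to emit "Thought:/Action:" text, which LangChain then parses.
      react_agent = create_react_agent(llm, tools, hub.pull("hwchase17/react"))

      # Tool calling: the LLM emits structured tool calls natively; no text parsing needed.
      prompt = ChatPromptTemplate.from_messages([
          ("system", "You are a helpful assistant."),
          ("human", "{input}"),
          ("placeholder", "{agent_scratchpad}"),
      ])
      tool_agent = create_tool_calling_agent(llm, tools, prompt)

      # Either agent can be run the same way through an AgentExecutor.
      result = AgentExecutor(agent=tool_agent, tools=tools, verbose=True).invoke(
          {"input": "How many letters are in 'LangChain'?"}
      )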

    • @Kenykore
      @Kenykore 1 month ago

      So would this be more stable than the ReAct pattern?

  • @lionelshaghlil1754
    @lionelshaghlil1754 1 month ago

    Did you notice that when choosing OpenAI GPT-4 as the model, the result was empty?
    Am I missing something, please?
    I was implementing similar code and faced the same problem: the result showed empty content despite calling the function successfully. Thanks

  • @MavVRX
    @MavVRX 1 month ago +2

    Unfortunately, it doesn't work with Ollama yet :( object has no attribute 'bind_tools'

    • @nikoG2000
      @nikoG2000 1 month ago

      How have you tried to use Ollama? Have you tried the following way:
      from langchain_community.chat_models import ChatOllama
      llm = ChatOllama(model="llama2", format="json", temperature=0)

    • @MavVRX
      @MavVRX 1 month ago

      @@nikoG2000 Yes, I tried; it doesn't work, as the community version of Ollama has yet to implement binding functions and tools.

  • @user-vu4or4ih8p
    @user-vu4or4ih8p 1 month ago

    very nice, but how do I call and execute the tool from the JSON that was returned?

    • @LangChain
      @LangChain 1 month ago

      LangChain's AgentExecutor is built for this: python.langchain.com/docs/modules/agents/agent_types/tool_calling/
      You can always "manually" pass the parameters back into the original tool, as well.
      We're also supporting more advanced agent workflows in LangGraph (github.com/langchain-ai/langgraph), and we've uploaded a cookbook on using the new tool calls with LangGraph here: github.com/langchain-ai/langchain/blob/master/cookbook/tool_call_messages.ipynb. More to come on this!
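
      For example, a minimal sketch of the "manual" route (the model and toy tool below are illustrative assumptions, not the only way to do it):

      from langchain_core.messages import HumanMessage, ToolMessage
      from langchain_core.tools import tool
      from langchain_openai import ChatOpenAI

      @tool
      def multiply(a: int, b: int) -> int:
          """Multiply two integers."""
          return a * b

      llm_with_tools = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0).bind_tools([multiply])

      messages = [HumanMessage("What is 4 times 23?")]
      ai_msg = llm_with_tools.invoke(messages)
      messages.append(ai_msg)

      # Pass each tool call's args back into the matching tool, then hand the result
      # to the model as a ToolMessage so it can write the final answer.
      for tool_call in ai_msg.tool_calls:
          output = multiply.invoke(tool_call["args"])
          messages.append(ToolMessage(str(output), tool_call_id=tool_call["id"]))

      final_answer = llm_with_tools.invoke(messages)
      print(final_answer.content)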

  • @lamkhatinh8344
    @lamkhatinh8344 1 month ago

    agentExecutor = AgentExecutor(
        agent=self.chain,
        tools=self.tools,
        verbose=True,
        memory=memory,
        handle_parsing_errors=True,
    )
    I build my agent with the command above; can you tell me the difference between your method and my code?

  • @abdullahsiddique6393
    @abdullahsiddique6393 1 month ago

    First

  • @kaustuvchakraborty7372
    @kaustuvchakraborty7372 1 month ago

    Very poor explanation; executing 6 lines of code in a Jupyter notebook was not what I expected.

  • @ihateorangecat
    @ihateorangecat 1 month ago +1

    I hate Python. 🙂

    • @LangChain
      @LangChain 1 month ago +2

      We released the same feature in JS too, in case that's more your speed! js.langchain.com/docs/modules/model_io/chat/function_calling

    • @hiranga
      @hiranga 1 month ago

      @@LangChain is this working with ChatGroq/Mixtral?

    • @sepia_tone
      @sepia_tone 1 month ago

      @@LangChain ... the link that points to the resource page for "Tool calling agent shows how to create an agent that uses the standardized tool calling interface" is invalid.