Fully local tool calling with Ollama

  • Published Sep 27, 2024

COMMENTS • 40

  • @automatalearninglab 2 months ago +1

    Nice! Love it! I was looking for something like this today! So glad I decided to catch up on my langchain videos! hehe Cheers!

  • @TheYoungSoul 2 months ago +6

    Thank you for this example!! I just ran through it using the llama3.1 8B model, and it worked flawlessly. llama3 does not work, but the 3.1 model did. I actually did not expect that.

  • @kyudechama 2 months ago +1

    Somehow I only get one tool call in my list as an answer, even if I ask a question that would warrant multiple tool calls as a response. The Ollama API is able to return multiple tool calls, and OpenAI as well.
    I tried several models, including llama3.1, llama3, firefunctionv2, and the Groq versions.
    Could it be your system prompt that prevents returning multiple function calls?

  • @davesabra4320 2 months ago

    Very, very clearly explained. Thanks.

  • @IdPreferNot1 2 months ago +2

    This is THE content! Please take it to the top ---> source code link for the longer script?

    • @JDWilsonJr 2 months ago

      Hello @IdPreferNot1. Apologies as I do not understand. May I trouble you for specific instructions to see the link to the notebook. I am clearly missing something. Thank you for your help.

    • @IdPreferNot1 2 months ago

      @@JDWilsonJr I'm saying it's great content... he'd make it better if he shared the source code he went through. :)

    • @LangChain 2 months ago +4

      @@IdPreferNot1 Here it is! github.com/langchain-ai/langgraph/blob/main/examples/tutorials/tool-calling-agent-local.ipynb

  • @blanky0230 1 month ago

    Still killing it

  • @jonasopina3529 2 months ago

    I'm trying to use it with Ollama from another computer on the same network, but I can't set the base_url.
    I'm trying to set it like llm = ChatOllama(model="modelName", base_url="http::11434"...), but it doesn't work.
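    For reference, a remote Ollama instance is usually reached by pointing base_url at the full scheme, host, and port of the server. A minimal sketch, assuming Ollama's default port 11434 and a hypothetical LAN address (the import is guarded so the snippet runs even where langchain-ollama is not installed):

    ```python
    # Guarded import: langchain-ollama may not be installed everywhere.
    try:
        from langchain_ollama import ChatOllama  # pip install langchain-ollama
    except ImportError:
        ChatOllama = None

    # Hypothetical LAN address of the machine running `ollama serve`.
    OLLAMA_HOST = "192.168.1.50"
    # The URL needs scheme, host, and port; "http::11434" is missing the host.
    base_url = f"http://{OLLAMA_HOST}:11434"

    if ChatOllama is not None:
        llm = ChatOllama(model="llama3.1", base_url=base_url, temperature=0)
    ```

    Note that Ollama binds to 127.0.0.1 by default, so the server machine typically needs to be started with `OLLAMA_HOST=0.0.0.0 ollama serve` before other computers on the network can reach it.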

  • @alenjosesr3160 2 months ago +2

    def bind_tools(
        self,
        tools: Sequence[Union[Dict[str, Any], Type[BaseModel], Callable, BaseTool]],
        **kwargs: Any,
    ) -> Runnable[LanguageModelInput, BaseMessage]:
        raise NotImplementedError()

    Ollama bind_tools says "not implemented".

    • @utkucanaytac5417 2 months ago

      Use ChatOllama from langchain_ollama, not the one from the community models.
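      A minimal sketch of the fix suggested above: bind_tools is implemented on ChatOllama from the dedicated langchain-ollama package, while the older langchain_community class is the one that raises NotImplementedError. The model name and dummy tool here are illustrative, and the import is guarded so the snippet runs even where the package is absent:

      ```python
      # Correct import: the dedicated package implements bind_tools;
      # langchain_community.chat_models.ChatOllama does not.
      try:
          from langchain_ollama import ChatOllama  # pip install langchain-ollama
      except ImportError:
          ChatOllama = None

      def get_weather(city: str) -> str:
          """Return the weather for a city (dummy tool for illustration)."""
          return f"Sunny in {city}"

      if ChatOllama is not None:
          llm = ChatOllama(model="llama3.1", temperature=0)
          llm_with_tools = llm.bind_tools([get_weather])  # no NotImplementedError
      ```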

  • @LyuboslavPetrov 1 month ago

    Would be neat if NO proprietary/paid tools were used (e.g. for embedding or web search). But, of course, it's no big deal to do this ourselves. Thank you

  • @bhaibhai-qe8tt 1 month ago

    response = ChatOllama(
    ^^^^^^^^^^^
    TypeError: 'method' object is not subscriptable

  • @AnthonyGarland 2 months ago

    This is the code at about 3:48:

    from typing import List
    from typing_extensions import TypedDict
    from langchain_ollama import ChatOllama

    def validate_user(user_id: int, addresses: List) -> bool:
        """
        Validate user using historical addresses.

        Args:
            user_id: (int) the user ID.
            addresses: Previous addresses.
        """
        return True

    llm = ChatOllama(
        model="llama3-groq-tool-use",
        temperature=0,
    )

    # %%
    llm_with_tool = llm.bind_tools([validate_user])

    # %%
    result = llm_with_tool.invoke(
        "Could you validate user 123? They previously lived at "
        "123 Fake St in Boston MA and 234 Pretend Boulevard in "
        "Houston TX."
    )
    result.tool_calls

  • @ai-touch9 2 months ago +1

    Excellent work, as usual! :)
    Can you share the code link?

    • @LangChain 2 months ago +1

      Here it is! github.com/langchain-ai/langgraph/blob/main/examples/tutorials/tool-calling-agent-local.ipynb

  • @dimosdennis 1 month ago

    It is good, but it is still not there. I did several tests where I give it two dummy tools to use, and it is able to distinguish quite effectively. However, it will always call the tools, even when asked not to. Tried different prompts, no good. Still, it is better than it was, and the package is nice :)

  • @omni9796 2 months ago

    Great video!
    Is this also available for Node?

  • @Imran-Alii 2 months ago

    Awesome!!!

  • @user-mi8gf5ez5g 2 months ago

    Could you please share a notebook link? Thanks for making these videos.

    • @LangChain 2 months ago

      Here it is! github.com/langchain-ai/langgraph/blob/main/examples/tutorials/tool-calling-agent-local.ipynb

  • @JDWilsonJr 2 months ago

    Hello Lance. Great presentation. Looking everywhere for your jupyter notebook. You introduce so many new concepts in your tutorials that it is almost impossible to reproduce visually from the video. I see the version you used in the video remained untitled through the end. Will you be posting the notebook in github examples like you have in the past? Your work is amazing and valuable and we are scrambling to catch up!

    • @LangChain 2 months ago

      Here it is! github.com/langchain-ai/langgraph/blob/main/examples/tutorials/tool-calling-agent-local.ipynb

    • @JDWilsonJr 2 months ago

      @@LangChain Sooo appreciate your response and the link. Keep up the great work.

  • @kuruphaasan 1 month ago

    3:32 I get an empty array when I run the exact same code, can you help me here? My langchain-ollama package version is 0.1.1 and I have tried both llama3-groq fine-tuned model and llama3.1

    • @BushengZhang 1 month ago

      Yes, I have encountered the same problem!!! I've been puzzled for half a day...

    • @kuruphaasan 1 month ago

      @@BushengZhang I am still not able to figure out the reason; I have checked the GitHub issues as well. Not sure if it's a bug or something else.

    • @BushengZhang 1 month ago

      Oh, I just found a solution: I switched to OllamaFunctions to structure the LLM's outputs, and it worked.

    • @kuruphaasan 1 month ago

      @@BushengZhang Oh, great. Can you please share the example code?
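      For readers hitting the same empty tool_calls list: one workaround along the lines described above is to force a structured response rather than relying on the model to decide to emit a tool call on its own. A hedged sketch, assuming langchain-ollama's with_structured_output accepts a TypedDict schema; the schema and model name are illustrative, and the import is guarded so the snippet runs without the package:

      ```python
      from typing import List, TypedDict

      # Guarded import: langchain-ollama may not be installed everywhere.
      try:
          from langchain_ollama import ChatOllama
      except ImportError:
          ChatOllama = None

      # Illustrative schema: instead of hoping the model chooses to call a
      # tool, force its output into this shape.
      class UserValidation(TypedDict):
          user_id: int
          addresses: List[str]

      if ChatOllama is not None:
          llm = ChatOllama(model="llama3.1", temperature=0)
          structured_llm = llm.with_structured_output(UserValidation)
          # result = structured_llm.invoke("Could you validate user 123? ...")
          # result["user_id"] and result["addresses"] are then populated.
      ```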

  • @eMotionAllDamAge_ 2 months ago

    Great content! Please, share the code 😃

    • @LangChain 2 months ago +1

      Here it is! github.com/langchain-ai/langgraph/blob/main/examples/tutorials/tool-calling-agent-local.ipynb

    • @eMotionAllDamAge_ 2 months ago

      @@LangChain Thanks !

  • @hor1zonLin 1 month ago

    Why does the same code return [ ], the empty list, for me?

    • @hor1zonLin 1 month ago

      from typing import List
      from langchain_ollama import ChatOllama
      from typing_extensions import TypedDict

      def validate_user(user_id: int, addresses: List) -> bool:
          """Validate user using historical addresses.

          Args:
              user_id: (int) the user ID.
              addresses: Previous addresses.
          """
          return True

      llm = ChatOllama(
          model="llama3-groq-tool-use",
          temperature=0,
      ).bind_tools([validate_user])

      result = llm.invoke(
          "Could you validate user 123? They previously lived at "
          "123 Fake St in Boston MA and 234 Pretend Boulevard in "
          "Houston TX."
      )
      result.tool_calls
      [ ]

    • @kuruphaasan 1 month ago

      ​@@hor1zonLin were you able to figure out and fix the issue?

    • @BushengZhang 1 month ago

      @@kuruphaasan @hor1zonLin I have the same problem!