How to Set Up Ollama for Seamless Function Calls with this Crazy Update

  • Published 13 Jan 2025

COMMENTS • 12

  • @Storytelling-by-ash · A month ago +1

    ❤ Awesome, thanks for sharing

  • @RameshBaburbabu · A month ago +1

    Thanks for the clip and explanation. I see the function sometimes takes a number and sometimes a string. 1. How about concatenating "Prompt" + "Engineer" = "Prompt Engineer"? 2. Add Five + Two = 7. The place where you converted the char to an int seemed a bit odd.

    • @PromptEngineer48 · A month ago

      Yes. Sometimes the model outputs the arguments as ints, sometimes as strings. To address the issue, I convert the strings to numbers. As of now, concatenating "Prompt" and "Engineer" won't work; your questions should be based only on the functions you have provided.
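      The coercion described in this reply can be sketched as a small helper that normalizes whatever the model put in the tool call's arguments before the function runs. The function names below are my own illustration, not taken from the video:

      ```python
      def coerce_args(arguments: dict) -> dict:
          """Convert numeric-looking string arguments to int/float.

          Models sometimes emit {"a": "5"} instead of {"a": 5}; this
          normalizes the values so a tool like add_two_numbers works
          either way. Non-numeric strings are left untouched.
          """
          coerced = {}
          for key, value in arguments.items():
              if isinstance(value, str):
                  try:
                      value = int(value)
                  except ValueError:
                      try:
                          value = float(value)
                      except ValueError:
                          pass  # genuinely a string; keep as-is
              coerced[key] = value
          return coerced


      def add_two_numbers(a, b):
          return a + b


      # Whether the model sent "5" or 5, the call now behaves the same:
      print(add_two_numbers(**coerce_args({"a": "5", "b": 2})))  # 7
      ```

      In a real handler you would apply `coerce_args` to the `arguments` of each tool call before dispatching to the matching Python function.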

  • @TheSalto66 · A month ago

    If I prompt "What is the sky's color?" it answers with "Calling function: subtract_two_numbers". It seems llama3.2 is forced to use a tool even when using a tool makes no sense?
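    One way to avoid this failure mode is to only dispatch to a function when the response actually contains tool calls, and otherwise return the model's plain text. A minimal sketch of that dispatch logic (the dict shape mirrors the Ollama chat response, but treat the exact fields as an assumption; the tool names are illustrative):

    ```python
    def handle_response(message: dict, tools: dict):
        """Dispatch tool calls if present; otherwise return plain text.

        `message` is the assistant message from a chat response and
        `tools` maps tool names to Python callables. If the model did
        not request a tool, we simply return its text content instead
        of forcing a function call.
        """
        calls = message.get("tool_calls") or []
        if not calls:
            return message.get("content", "")
        results = []
        for call in calls:
            fn = call["function"]
            func = tools.get(fn["name"])
            if func is None:
                results.append(f"unknown tool: {fn['name']}")
                continue
            results.append(func(**fn["arguments"]))
        return results


    tools = {"subtract_two_numbers": lambda a, b: a - b}

    # No tool requested: pass the text answer through unchanged.
    print(handle_response({"content": "The sky is blue."}, tools))

    # Tool requested: run it with the supplied arguments.
    print(handle_response(
        {"tool_calls": [{"function": {"name": "subtract_two_numbers",
                                      "arguments": {"a": 5, "b": 2}}}]},
        tools,
    ))  # [3]
    ```

    Whether the model emits a tool call at all for an off-topic prompt is up to the model; smaller models like llama3.2 are known to over-trigger tools, so a fallback path like this is a reasonable safety net.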

  • @invasiveca · A month ago +1

    If I enter “hey, how's it going?” as a prompt, what result do I get?

    • @PromptEngineer48 · A month ago +1

      Oh. Your question has to use one of the functions mentioned. Updates coming soon.

  • @ashmin.bhattarai · A month ago

    How can I give the function's output back to the LLM, so my final answer comes from the LLM instead of from what the function returns?
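    The usual pattern for this is to run the function, append its result to the conversation as a "tool" message, and call the model again so it phrases the final answer itself. A minimal sketch of the message bookkeeping; the message shape follows the Ollama Python client's tool-calling examples, but treat the exact fields as an assumption, and the second chat call (commented out) needs a running Ollama server:

    ```python
    def append_tool_result(messages: list, tool_name: str, result) -> list:
        """Append a function's output as a 'tool' role message.

        After this, a second chat call lets the model compose the
        final natural-language answer from the tool result.
        """
        messages.append({
            "role": "tool",
            "name": tool_name,
            "content": str(result),
        })
        return messages


    messages = [{"role": "user", "content": "What is 5 + 2?"}]
    # ...suppose the first chat call requested add_two_numbers(5, 2)...
    result = 5 + 2
    append_tool_result(messages, "add_two_numbers", result)

    # Second round trip (requires a local Ollama server):
    # import ollama
    # final = ollama.chat(model="llama3.2", messages=messages)
    # print(final["message"]["content"])  # e.g. "5 + 2 is 7."
    print(messages[-1])
    ```

    The key point is that the tool result goes back into `messages` before the second call, so the model, not the function, produces the text the user sees.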

  • @mrpocock · A month ago +2

    I want to see a demo where the LLM realises it doesn't have a tool it needs, clones a tool's GitHub project, adds the tool, commits it, and then uses it.

    • @PromptEngineer48 · A month ago

      Understood, cool. Create tools on the fly. Nice.