AutoGen + Ollama + Gemma: How to Create LLM Agents Locally

  • Published Sep 26, 2024

COMMENTS • 4

  • @attilavass6935 6 months ago +1

    I'd love to watch content about function calling (using skills in Autogen Studio) in the same LLM setup (Ollama / Gemma).

    • @yeyulab 6 months ago +1

      Function calling (or tool usage) is a challenge for small LLMs, but it's possible, and I will cover it.
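
      As a rough illustration of what that setup could look like, here is a minimal sketch using the pyautogen 0.2.x decorator API against Ollama's OpenAI-compatible endpoint. The model tag, port, and termination convention are assumptions, and a small model like Gemma may still fail to emit well-formed tool calls:

```python
# A minimal sketch of registering a "skill" with pyautogen 0.2.x against a
# local Ollama endpoint. Assumptions: Ollama runs on its default port with
# the OpenAI-compatible API enabled, a Gemma model has been pulled (tag
# "gemma:7b" here), and the model can produce well-formed tool calls.
from typing import Annotated

import autogen

config_list = [{
    "model": "gemma:7b",                      # assumed local model tag
    "base_url": "http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
    "api_key": "ollama",                      # placeholder; Ollama ignores it
}]

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
    is_termination_msg=lambda m: "TERMINATE" in (m.get("content") or ""),
)

# The decorators register the function: the assistant may propose the call,
# and the user proxy executes it. The description is what the LLM sees when
# deciding whether to call the tool.
@user_proxy.register_for_execution()
@assistant.register_for_llm(description="Get the current weather for a city.")
def get_weather(city: Annotated[str, "Name of the city"]) -> str:
    return f"It is always sunny in {city}."  # toy stand-in for a real lookup

user_proxy.initiate_chat(assistant, message="What's the weather in Paris?")
```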

  • @joebywan 6 months ago

    What versions of the modules etc. are you using there? I'm seeing different output: it looks like the writer makes multiple attempts to contact the chat_manager before the chat_manager replies and hands off to the editor.

    • @yeyulab 6 months ago +1

      I am using pyautogen 0.2.16. Normally it isn't a library issue, since the embedded system prompt hasn't changed across these versions. I also saw the writer speak repeatedly sometimes during testing; that was mostly because the manager cannot fully follow the long accumulated prompt AutoGen constructs, given the model's limited capability. You can try refining the "description" of the two agents to make the speaking sequence clearer.
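
      For illustration, below is a minimal sketch of that writer/editor group chat with explicit `description` fields. The model tag and endpoint are assumptions; the `round_robin` speaker selection shown is one way to sidestep the manager's LLM-based next-speaker choice entirely when a small model keeps re-selecting the writer:

```python
# A minimal sketch of the writer -> editor flow being discussed, with explicit
# `description` fields that feed the group chat's speaker-selection prompt.
# Assumptions: the same local Ollama endpoint and Gemma model tag as above.
import autogen

llm_config = {"config_list": [{
    "model": "gemma:7b",                      # assumed local model tag
    "base_url": "http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
    "api_key": "ollama",                      # placeholder; Ollama ignores it
}]}

writer = autogen.AssistantAgent(
    name="writer",
    llm_config=llm_config,
    description="Drafts the post. Writes exactly one draft, then stops.",
)
editor = autogen.AssistantAgent(
    name="editor",
    llm_config=llm_config,
    description="Reviews the writer's draft and suggests improvements.",
)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
)

groupchat = autogen.GroupChat(
    agents=[user_proxy, writer, editor],
    messages=[],
    max_round=6,
    # "round_robin" bypasses LLM-based speaker selection, so a weak model
    # cannot keep picking the writer again instead of handing off.
    speaker_selection_method="round_robin",
)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user_proxy.initiate_chat(manager, message="Write a short post on local LLM agents.")
```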