AutoGen + Custom Model + Gemma (or Any Model), Here is the Ultimate Solution

  • Published Sep 26, 2024

COMMENTS • 11

  • @madimakes
    @madimakes 6 months ago

    nice! just spent a week building one for another model... watching this now to see what I've missed!

    • @yeyulab
      @yeyulab  6 months ago +1

      Good luck! Let me know if you have any questions while testing your project!

    • @madimakes
      @madimakes 6 months ago

      @@yeyulab thank you! Currently having issues with GroupChat and receiving function-calling results (though I've successfully tackled function calling itself); digging into the code since the MSFT documentation doesn't cover functions...

  • @ThangNguyen-ot8uz
    @ThangNguyen-ot8uz 6 months ago

    Sorry, I'm a newbie here. What's the advantage of using AutoGen compared to other RAG setups? Even with bare-bones Ollama, local chat with Gemma is already possible.

    • @yeyulab
      @yeyulab  6 months ago

      Multi-agent apps like AutoGen/CrewAI can work more precisely and deterministically on a "workflow" task than a single-agent LLM app, where you need to be extremely careful with prompt design.
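
      To illustrate the point above, here is a minimal pure-Python sketch of the "deterministic workflow" idea (not actual AutoGen code; the agent names and run_workflow helper are hypothetical stand-ins for LLM-backed agents):

      ```python
      # Conceptual sketch of a multi-agent workflow (hypothetical, not
      # AutoGen's API): each agent owns one well-defined step, so the
      # overall flow stays fixed instead of depending on one carefully
      # engineered prompt for a single agent.

      def researcher(task: str) -> str:
          # Stand-in for an agent that gathers raw material.
          return f"notes on: {task}"

      def writer(notes: str) -> str:
          # Stand-in for an agent that turns notes into a draft.
          return f"draft based on {notes}"

      def reviewer(draft: str) -> str:
          # Stand-in for an agent that checks and approves the draft.
          return f"approved: {draft}"

      def run_workflow(task: str) -> str:
          # Agents are chained in a fixed order, so the same task always
          # follows the same path through the pipeline.
          return reviewer(writer(researcher(task)))

      print(run_workflow("summarize Gemma benchmarks"))
      ```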

  • @ZexuanWang-zx9fd
    @ZexuanWang-zx9fd 6 months ago

    Hi, could you make a tutorial on getting it to work with LLaVA? I found that switching the model on Hugging Face produces different bugs depending on the model. It's really annoying...

    • @yeyulab
      @yeyulab  6 months ago

      Really? Let me try it.

    • @ZexuanWang-zx9fd
      @ZexuanWang-zx9fd 6 months ago

      @@yeyulab Really appreciate it! Looking forward to your new video!

    • @ZexuanWang-zx9fd
      @ZexuanWang-zx9fd 6 months ago

      @@yeyulab The specific error I encountered is 'LlavaProcessor' object has no attribute 'eos_token_id', and the same for apply_chat_template. Thanks!
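
      For context on the error above: in the Transformers library, LlavaProcessor wraps a tokenizer, so attributes like eos_token_id and apply_chat_template usually live on processor.tokenizer rather than on the processor itself. A hedged sketch of a fallback helper (the Dummy classes here are illustrative stand-ins, not Transformers classes):

      ```python
      # The LLaVA error in the comment above typically arises because
      # LlavaProcessor wraps a tokenizer: eos_token_id and
      # apply_chat_template live on processor.tokenizer, not on the
      # processor itself. This helper falls back accordingly.
      # (DummyTokenizer/DummyProcessor are illustrative stand-ins.)

      def get_tokenizer_attr(obj, name):
          """Return obj.<name>, falling back to obj.tokenizer.<name>."""
          if hasattr(obj, name):
              return getattr(obj, name)
          tokenizer = getattr(obj, "tokenizer", None)
          if tokenizer is not None and hasattr(tokenizer, name):
              return getattr(tokenizer, name)
          raise AttributeError(
              f"{type(obj).__name__} has no attribute {name!r}"
          )

      class DummyTokenizer:
          eos_token_id = 2

      class DummyProcessor:
          tokenizer = DummyTokenizer()

      print(get_tokenizer_attr(DummyProcessor(), "eos_token_id"))  # prints 2
      ```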