AutoGen DeepDive: Building Conversational Agents for Kubernetes!

  • Published 24 Sep 2024

COMMENTS • 36

  • @CaptTerrific 8 months ago +2

    There are a LOT of channels offering ~10 minute videos diving into the most recent and powerful LLM frameworks... most offering far less impactful examples (often minimal transformations of tutorials published in the repositories themselves), with far less clear explanations, with far less fluency both in the code and their walkthroughs.
    Your presentation style is clear, concise, and dense, yet friendly and approachable :) And using Kubernetes as an example, built on top of a local LLM (including explanations of the how and why), is not only practical but also helps illustrate the range of use cases beyond yet another sqlite+gpt-4 "research agent swarm!" video.
    Keep up the great work! You're going to rise to the top of these in no time!!!

    • @YourTechBudCodes 8 months ago +1

      Thank you so much for the kind words. I really hope my videos add value to anyone who watches them. This motivates me to keep going.

  • @suseendaran5690 9 months ago +1

    I know this channel's gonna become huge, so I wanna be one of the people who followed it from the start ❤

  • @tocutandrei9465 9 months ago +3

    This is legit the best video explaining how AutoGen works, and I also love that you use local models. Keep on doing amazing things. I would like to see what other real-world use cases there are for the different types of agents.

    • @YourTechBudCodes 9 months ago +1

      Thank you so much for the kind words. I'm planning to make videos on WebSearch and RAG soon.

  • @adpandehome996 6 months ago +1

    Hey man. Good videos. You should make one on HashiCorp Nomad. Seems like everybody is chasing k8s even though it's overkill for most cases. New and early-stage startups would benefit from a Nomad tutorial.

    • @YourTechBudCodes 6 months ago +1

      I kinda like that idea. Let me prepare something really quick

  • @bawbee27 7 months ago

    Dude this is REALLY good. Well done & thank you 👏🏽

    • @YourTechBudCodes 7 months ago

      I really appreciate it. Glad it was helpful.

  • @Matthias-c4p 7 months ago

    Thanks for this video. It's really great. I would love to see a video about how to get the output from Autogen into a webapp, including the human input. Would be great. Thanks!

    • @YourTechBudCodes 7 months ago

      Thanks. I'm glad you found it to be helpful.
      A video to integrate all this with a web app is definitely in the works. Will share that soon.

  • @-Evil-Genius- 9 months ago +2

    🎯 Key Takeaways for quick navigation:
    00:00 🤖 *[Introduction and Restrictions]*
    - Setting the stage for using AutoGen to create AI-powered applications.
    - Three self-imposed restrictions: Open-source models only, code explanation in detail, and ensuring replicability in viewers' projects.
    - Emphasizing the commitment to using open-source models contrary to common beliefs.
    02:30 🛠️ *[Building External System Adapter]*
    - Creating an instance of an external system adapter for Kubernetes.
    - Explaining the structure of the adapter class and its get_resources method.
    - Discussing the flexibility of the method parameters and the use of AI to determine values.
    04:19 🌐 *[Configuring AutoGen for Kubernetes]*
    - Configuring AutoGen for AI-powered interaction with Kubernetes (a config sketch follows this summary).
    - Setting up the llama.cpp inference server for better performance.
    - Adjusting parameters like cache, response timeout, and temperature for optimal AI responses.
    06:25 🤝 *[Agent Coordination and Workflow]*
    - Introducing the Kubernetes engineer agent responsible for calling the function.
    - Describing the role of the Kubernetes expert agent in researching values.
    - Explaining the user proxy agent as a substitute for human input and the group chat manager for agent coordination.
    07:35 🔄 *[Agent Coordination Workflow]*
    - Detailing the workflow of agents' coordination in AutoGen.
    - Explaining how the group chat manager orchestrates the conversation between agents.
    - Highlighting the role-playing game analogy used for model decision-making.
    09:36 🤔 *[Testing the Multi-Agent System]*
    - Demonstrating the interaction and coordination of agents in action.
    - Checking the logs for successful execution and agent collaboration.
    - Acknowledging the efficiency of the agents in working as a team for the intended task.
    Made with HARPA AI
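
For readers who want to see roughly what the setup summarized above looks like in code, here is a minimal sketch in the pyautogen 0.2 style. The model name, base_url, parameter values, and the get_resources placeholder are illustrative assumptions, not the actual code from the video.

```python
# Minimal sketch (pyautogen 0.2 style) of the agent topology described above.
# Model name, base_url, and the get_resources body are placeholders.
import autogen

# Point AutoGen at a local OpenAI-compatible inference server.
llm_config = {
    "config_list": [{
        "model": "openhermes",                   # placeholder local model name
        "base_url": "http://localhost:8000/v1",  # placeholder local endpoint
        "api_key": "not-needed",                 # local servers usually ignore this
    }],
    "cache_seed": None,   # disable response caching
    "timeout": 120,       # generous timeout for local inference
    "temperature": 0.0,   # example value; keeps tool calling deterministic
}

# The "external system adapter": the function the engineer agent will call.
def get_resources(kind: str, namespace: str = "default") -> str:
    """Placeholder for a real Kubernetes lookup (kubectl / client library)."""
    return f"(pretend list of {kind} in namespace {namespace})"

engineer = autogen.AssistantAgent(
    name="kubernetes_engineer",
    system_message="You call get_resources to query the cluster.",
    llm_config=llm_config,
)
expert = autogen.AssistantAgent(
    name="kubernetes_expert",
    system_message="You decide which resource kinds and namespaces to look up.",
    llm_config=llm_config,
)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",       # stands in for the human
    code_execution_config=False,
)

# The engineer suggests the call; the user proxy executes it.
autogen.register_function(
    get_resources,
    caller=engineer,
    executor=user_proxy,
    description="List Kubernetes resources of a given kind in a namespace.",
)

# The group chat manager orchestrates which agent speaks next.
groupchat = autogen.GroupChat(agents=[user_proxy, engineer, expert],
                              messages=[], max_round=10)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

user_proxy.initiate_chat(manager, message="Which pods are running in the default namespace?")
```

Note that register_function relies on the backend exposing OpenAI-style function/tool calls, which is exactly what the function-calling discussion further down this thread is about.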

  • @supernewuser 7 months ago +1

    well done, very underrated content

  • @mcdaddy42069 9 months ago +1

    You are the best, you are the best, you are the best. Best AutoGen tutorial creator out there, easily.

  • @shubhamnazare3525 9 months ago

    Thanks for explaining AutoGen!

    • @YourTechBudCodes 9 months ago

      You're welcome. I'm glad you found it to be helpful.

  • @ianng8243 11 days ago +1

    I need part 2!!

    • @YourTechBudCodes 10 days ago

      Haha. Glad you liked it. I just posted a part two last week. Do check it out and let me know your thoughts.

  • @golangNinja29 9 months ago

    Amazing video ❤, excited for the series!

  • @ismailyussuf9740 8 months ago

    Great video and you should definitely do more, please. I have a question! How good is Mistral 7B at function calling? Is it as accurate as OpenAI function calling?

    • @YourTechBudCodes 8 months ago +1

      It really depends. You should get good performance if you limit the number of functions per agent and provide a rich conversation history before the function is called. I exclusively use OpenHermes 2.5 for my agents that need function calling (a rough example follows this thread).

    • @ismailyussuf9740 8 months ago

      @YourTechBudCodes gotcha thanks
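
As a rough illustration of the "few functions per agent, rich description" tip above, AutoGen also accepts an explicit function schema in llm_config. The schema, model name, endpoint, and the get_pods helper below are made up for the example, assuming pyautogen 0.2.

```python
# Illustrative only: one well-described function attached to a single agent,
# which tends to help smaller local models pick the right call.
import autogen

get_pods_schema = {
    "name": "get_pods",
    "description": ("List the pods running in a single Kubernetes namespace. "
                    "Use this whenever the user asks what is running in the cluster."),
    "parameters": {
        "type": "object",
        "properties": {
            "namespace": {"type": "string",
                          "description": "Namespace to query, e.g. 'default'."},
        },
        "required": ["namespace"],
    },
}

def get_pods(namespace: str) -> str:
    return f"(pretend list of pods in {namespace})"  # placeholder implementation

engineer = autogen.AssistantAgent(
    name="kubernetes_engineer",
    llm_config={
        "config_list": [{"model": "openhermes",                   # placeholder
                         "base_url": "http://localhost:8000/v1",  # placeholder
                         "api_key": "not-needed"}],
        "functions": [get_pods_schema],   # exactly one function for this agent
    },
)
user_proxy = autogen.UserProxyAgent(name="user_proxy", human_input_mode="NEVER",
                                    code_execution_config=False)
# Map the schema name to the Python callable that actually runs.
user_proxy.register_function(function_map={"get_pods": get_pods})
```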

  • @abhishekkhanna1349 9 months ago

    This is very interesting!!

  • @MrMoonsilver 7 months ago

    Is there a possibility to run an "Autogen Inference Server" with an API? I think that could be really powerful.

    • @YourTechBudCodes 7 months ago

      Uhm. I'm not sure I understand the question. The inference server does set up an API.
      Or are you talking about some kind of SaaS service you can integrate with?

  • @IdPreferNot1 9 months ago +1

    Do you have a specific requirements.yml file for the conda environment you say to set up in step 1 of your "Setup conda env", or can I just create a blank one?

    • @YourTechBudCodes 9 months ago

      I just realised that I made a mistake in the README. You don't need conda since we are using Poetry. I have updated the README to reflect that.

  • @dekeleli 7 months ago

    I am trying to run this with LM Studio instead of Ollama and the model just generates text instead of calling the function. Maybe AutoGen changed something since this video came out?

    • @YourTechBudCodes 7 months ago

      Actually... I have written my own wrapper on top of Ollama to power function calling, since most open-source servers don't support it. Try using Inferix as your server.
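
For anyone hitting the same problem: AutoGen simply talks to whatever OpenAI-compatible endpoint is listed in config_list, so switching between LM Studio, an Ollama wrapper, or Inferix is mostly a base_url change; whether the model actually emits a function call depends on the server implementing the OpenAI functions/tools API. A sketch with assumed ports and model names:

```python
# Swapping the backing server is just a config_list change. Function calling
# only works if the chosen server implements the OpenAI functions/tools API.
config_list_lm_studio = [{
    "model": "local-model",                  # LM Studio serves whatever model is loaded
    "base_url": "http://localhost:1234/v1",  # LM Studio's default port
    "api_key": "lm-studio",
}]

config_list_inferix = [{
    "model": "openhermes",                   # placeholder model name
    "base_url": "http://localhost:8080/v1",  # wherever the Inferix wrapper listens
    "api_key": "not-needed",
}]

# Pass one of these as llm_config={"config_list": ...} for the agents in the
# earlier sketches.
```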

    • @dekeleli 7 months ago

      @YourTechBudCodes Interesting, thank you!