How to use Gemini function calling with Cloud Run

  • Published Feb 2, 2025

COMMENTS • 11

  • @babusivaprakasam9846 • 8 days ago

    Hi Martin, I have built 30+ Gen AI apps with Google LLMs, and yet your videos keep me engaged and give me new ways to explain the concepts. Loved it!

  • @banzai316 • 10 days ago • +1

    Good job! Working very smoothly with my Kotlin app

  • @oscarrivera4037 • 8 days ago

    Great episode! I think we should train Gemini with our own data as well.

  • @BartLekens • 8 days ago

    Super helpful, thank you. Extra credit for using a websocket! 👍

  • @MaxPugh-c9w • 8 days ago

    Can't believe this came out 2 days ago. Solves the issue I JUST ran into

  • @dheer211 • 9 days ago • +1

    How is this different from grounding, and what are the pros and cons of each (grounding vs. function calling)?

    • @TheMomander • 9 days ago • +1

      Good question! Grounding behaves like RAG, but is managed by Google so you don't have to set up the vector database. In the video at 2:06 there is a comparison between RAG and function calling. Everything that I said about RAG in that chapter of the video also applies to grounding.
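
      To make the comparison concrete, here is a minimal sketch of how the two approaches are configured, assuming the Node.js Vertex AI SDK (@google-cloud/vertexai); the project ID, model name, and getWeather declaration are placeholders, not code from the video. Grounding is a tool that Google executes for you, while function calling declares functions that your own code must run:

      import { VertexAI, FunctionDeclarationSchemaType } from '@google-cloud/vertexai';

      const vertexAI = new VertexAI({ project: 'my-project', location: 'us-central1' });

      // Grounding: Google performs the retrieval; no extra loop in your code.
      const groundedModel = vertexAI.getGenerativeModel({
        model: 'gemini-1.5-pro',
        tools: [{ googleSearchRetrieval: {} }],
      });

      // Function calling: you declare the functions and execute them yourself.
      const functionCallingModel = vertexAI.getGenerativeModel({
        model: 'gemini-1.5-pro',
        tools: [{
          functionDeclarations: [{
            name: 'getWeather', // hypothetical function, for illustration only
            description: 'Returns the current weather for a given city',
            parameters: {
              type: FunctionDeclarationSchemaType.OBJECT,
              properties: { city: { type: FunctionDeclarationSchemaType.STRING } },
              required: ['city'],
            },
          }],
        }],
      });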

  • @Subeeshan • 10 days ago • +1

    Martin Omander - Great episode! Follow-up question: the prompts I'm sending are huge by design. Is there any way I can switch prompts based on the function that should be called? For example, a prompt for weather, and another complex prompt to fetch something else.

    • @TheMomander • 10 days ago • +1

      I think so. Right now the code does this:
      1. Starts a new chat session, using generativeModel.startChat().
      2. Sends the user's question to the model, using sendMessage().
      3. Calls the function (like the weather API) that the model requested.
      4. Sends the function return value to the model, using sendMessage().
      5. Returns the model's response to the user.
      The model considers the whole chat session when it generates an answer in step 4. So you could send another prompt to the model (using sendMessage()) between steps 2 and 3, and the model would take that into account in step 4.
      I haven't done this myself. If you try it out, let us know how it goes! (See the sketch below.)
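
      A minimal sketch of those five steps, with the extra prompt from the suggestion above marked between steps 2 and 3; this assumes the Node.js Vertex AI SDK (@google-cloud/vertexai), and the project ID, model name, and weather values are stand-ins rather than the video's actual code:

      import { VertexAI } from '@google-cloud/vertexai';

      const vertexAI = new VertexAI({ project: 'my-project', location: 'us-central1' });
      const generativeModel = vertexAI.getGenerativeModel({
        model: 'gemini-1.5-pro',
        tools: [{ functionDeclarations: [/* e.g. a getWeather declaration */] }],
      });

      async function answer(userQuestion: string): Promise<string> {
        // Step 1: start a new chat session.
        const chat = generativeModel.startChat();

        // Step 2: send the user's question to the model.
        let result = await chat.sendMessage(userQuestion);

        // Extra prompt, per the suggestion above (untested, as noted): because
        // the model considers the whole session, a function-specific prompt
        // sent here would be taken into account in step 4.
        // result = await chat.sendMessage('Answer using metric units.');

        // Step 3: call the function (like the weather API) the model requested.
        const part = result.response.candidates?.[0]?.content?.parts?.[0];
        if (part?.functionCall) {
          const weather = { forecast: 'sunny', tempC: 22 }; // stand-in for a real API call

          // Step 4: send the function's return value back to the model.
          result = await chat.sendMessage([{
            functionResponse: { name: part.functionCall.name, response: weather },
          }]);
        }

        // Step 5: return the model's response to the user.
        return result.response.candidates?.[0]?.content?.parts?.[0]?.text ?? '';
      }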

  • @yuriferreira8532 • 8 days ago

    Would you have a Python example?

  • @Suraborisut-o8g • 10 days ago

    🌺❤️🌺👍🇹🇭