Hi Martin, I have built 30+ Gen AI apps with Google LLMs, and yet your videos keep me engaged and give me a new way to explain the concepts. Loved it!
Good job! Works very smoothly with my Kotlin app.
Great episode! I think we should train Gemini with our own data as well.
Super helpful, thank you. Extra credit for using a websocket! 👍
Can't believe this came out 2 days ago. Solves the issue I JUST ran into!
How is this different from grounding, and what are the pros/cons of each (grounding vs. function calling)?
Good question! Grounding behaves like RAG, but is managed by Google so you don't have to set up the vector database. In the video at 2:06 there is a comparison between RAG and function calling. Everything that I said about RAG in that chapter of the video also applies to grounding.
Martin Omander - Great episode, follow-up question: the prompts I'm sending are huge by design. Is there any way I can switch the prompts based on the function it should call? For example, a prompt for weather and another, more complex prompt to fetch something else.
I think so. Right now the code does this:
1. Starts a new chat session, using generativeModel.startChat().
2. Sends the user's question to the model, using sendMessage().
3. Calls the function (like the weather API) that the model requested.
4. Sends the function return value to the model, using sendMessage().
5. Returns the model's response to the user.
The model considers the whole chat session when it generates an answer in step 4. So you could send another prompt to the model (using sendMessage()) between steps 2 and 3, and the model would take that into account in step 4.
I haven't done this myself. If you try it out, let us know how it goes!
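The five steps above can be sketched in Python. Note this is a minimal illustration of the flow, not the real SDK: `StubModel`, `ChatSession`, and `get_weather` are stand-ins I made up so the sketch runs on its own; a real app would use the Gemini SDK's `startChat()` and `sendMessage()` against an actual model.

```python
class ChatSession:
    """Stand-in for a Gemini chat session (not the real SDK)."""
    def __init__(self):
        self.history = []  # the model sees the whole history on every turn

    def send_message(self, message):
        self.history.append(message)
        if message.startswith("FUNCTION_RESULT:"):
            # Step 4: the model turns the function's return value
            # into a final answer, considering the full history.
            return f"Answer based on {len(self.history)} messages: {message}"
        # Step 2: the model replies by requesting a function call.
        return "CALL get_weather(city='Paris')"

class StubModel:
    """Stand-in for generativeModel; startChat() opens a session."""
    def start_chat(self):
        return ChatSession()

def get_weather(city):
    # Step 3: the app calls the real weather API; hard-coded here.
    return {"city": city, "temp_c": 18}

chat = StubModel().start_chat()                   # 1. start a chat session
reply = chat.send_message("What's the weather?")  # 2. send the user's question
if reply.startswith("CALL get_weather"):
    result = get_weather("Paris")                 # 3. call the requested function
    final = chat.send_message(f"FUNCTION_RESULT:{result}")  # 4. send it back
print(final)                                      # 5. return the answer to the user
```

Because `send_message` keeps appending to `history`, an extra prompt sent between steps 2 and 3 would simply become one more message the model considers in step 4, which is the idea suggested in the reply above.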
Would you have a Python example?
🌺❤️🌺👍🇹🇭