Make An AI Agent with OpenAI’s Advanced Voice Mode

  • Published Dec 27, 2024

COMMENTS • 11

  • @AdamLucek · 2 months ago +1

    📚To try everything Brilliant has to offer, free for a full 30 days, visit brilliant.org/AdamLucek/ You'll also get 20% off an annual premium subscription! 💡

  • @aakashchauhan5459 · 2 months ago

    Really great video, thanks for clearly explaining it.

  • @francois-olivierhoizey7354 · 1 month ago

    Is it necessary to convert voice to text to run a search in a vector database?
    Does the Realtime API do it, or do we have to do it through Whisper, for example?

    • @AdamLucek · 1 month ago

      The Realtime API covers function calling as well, no need for intermediary steps.

    • @francois-olivierhoizey7354 · 1 month ago

      @AdamLucek Ok, thanks. Would you know how to implement that in Python? Function calling to search a ChromaDB database and use the results in the answer?
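      A minimal sketch of what that could look like: declare a search tool in the session config, then turn the model's `response.function_call_arguments.done` server event into a `conversation.item.create` client event carrying a `function_call_output` item (these event names follow OpenAI's Realtime API docs). The tool name `search_knowledge_base` is hypothetical, and the ChromaDB collection is assumed to be created elsewhere:

      ```python
      import json

      # Tool declaration included in a session.update event so the model can call it.
      SEARCH_TOOL = {
          "type": "function",
          "name": "search_knowledge_base",  # hypothetical tool name
          "description": "Search the document store for passages relevant to the query.",
          "parameters": {
              "type": "object",
              "properties": {"query": {"type": "string"}},
              "required": ["query"],
          },
      }

      def search_knowledge_base(collection, query: str, n_results: int = 3) -> str:
          """Query a ChromaDB collection and join the top passages into one string."""
          results = collection.query(query_texts=[query], n_results=n_results)
          return "\n".join(results["documents"][0])

      def handle_function_call(event: dict, collection) -> dict:
          """Turn a response.function_call_arguments.done server event into the
          conversation.item.create client event that returns results to the model."""
          args = json.loads(event["arguments"])
          return {
              "type": "conversation.item.create",
              "item": {
                  "type": "function_call_output",
                  "call_id": event["call_id"],
                  "output": search_knowledge_base(collection, args["query"]),
              },
          }
      ```

      After sending the `function_call_output` item over the websocket, you would send a `response.create` event so the model generates its (audio) answer from the retrieved passages.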

  • @riversideautomation · 1 month ago

    Can't we build a better interface and deploy it on Render or Replit with credentials and a custom domain, so it's accessible anywhere?

  • @RolandoLopezNieto · 2 months ago

    Great video, thanks

  • @gramnegrod · 1 month ago

    Your educational videos are a hidden gem! Thx! One question… the cost is ridiculous. I've played with this and watched my token count, and even after they started the cache assistance (thx!) the cost is still crazy high and unusable in production. I've not seen anyone try to address this with tight control of the context, as it seems to really climb over a continued conversation. It seems like smart use of the cache should be able to control cost. Would that help? Or are we just stuck waiting on a price drop? I want to use this so bad.

    • @AdamLucek · 1 month ago +2

      Thanks! The big issue is definitely cost for actually using this in prod. OpenAI rolled out the caching capability to help with that, as you mentioned, but I suspect we'll have to wait for price drops/other competition for it to reach a more realistic state. Hoping this goes the same way as we've seen with LLM token prices dropping.
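
      On the "tight control of the context" idea: the Realtime API exposes a `conversation.item.delete` client event, so one sketch of cost control is to track item IDs from `conversation.item.created` server events and periodically drop everything but the most recent turns. The `prune_events` helper name is mine, and note the trade-off: aggressive pruning also changes the prompt prefix, which can reduce cache hits.

      ```python
      def prune_events(item_ids: list, keep_last: int = 6) -> list:
          """Build conversation.item.delete client events for every tracked
          conversation item except the most recent keep_last, capping the
          context (and per-turn token cost) of a long-running session."""
          to_drop = item_ids[:-keep_last] if keep_last > 0 else list(item_ids)
          return [
              {"type": "conversation.item.delete", "item_id": item_id}
              for item_id in to_drop
          ]
      ```

      Each returned event would be sent over the websocket before the next `response.create`.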