Multimodal AI: LLMs that can see (and hear)

  • Published 25 Dec 2024

COMMENTS • 13

  • @ShawhinTalebi
    @ShawhinTalebi  A month ago +6

    I'm excited to kick off this new series! Check out more resources and references in the description :)

  • @mohsinshah9933
    @mohsinshah9933 A month ago +5

    Hi Shaw Talebi,
    Please make some videos on LangChain, LangGraph, and AI agents.
    Your teaching style is the best and simplest.

    • @ShawhinTalebi
      @ShawhinTalebi  A month ago +2

      Thanks for the suggestion! I added that to my list :)

  • @sam-uw3gf
    @sam-uw3gf A month ago +2

    Great video! Please do videos on LangChain and AI agents.

  • @ifycadeau
    @ifycadeau A month ago

    WOOO 🎉 you’re back!!

  • @buanadaruokta8766
    @buanadaruokta8766 A month ago

    great video!

  • @mysteryman9855
    @mysteryman9855 A month ago

    I am trying to make an avatar that can control my computer with Open Interpreter and the HeyGen live-streaming API.

    • @ShawhinTalebi
      @ShawhinTalebi  A month ago

      Sounds like an awesome project! Claude's computer use capability might be helpful too: docs.anthropic.com/en/docs/build-with-claude/computer-use

  • @Ilan-Aviv
    @Ilan-Aviv A month ago

    Use dark mode man!!!
    I'll skip this video

    • @ShawhinTalebi
      @ShawhinTalebi  A month ago +1

      Thanks for the suggestion. I hadn't considered that before, but will experiment with it in future videos :)

    • @Ilan-Aviv
      @Ilan-Aviv A month ago

      @ShawhinTalebi Many, if not most, developers work in low-light spaces in dark mode. When they get this white-blue splash of light, it kills the eyes. Blue light also damages the brain in the long run.
      Just telling you so you know.

  • @jonnylukejs
    @jonnylukejs A month ago

    I have versions of all of the above, some open-sourced and some not.