Multimodality with Llama 3.2

  • Published Feb 3, 2025

COMMENTS • 5

  • @AI-Makerspace
    @AI-Makerspace  5 days ago +2

    Multi-Modal RAG with LangGraph and Together AI (with retrieval brought to you by ColPali): colab.research.google.com/drive/1VF-Ds7UXY4uYvPu5Iu5IbzDo_qDcJK2X?usp=sharing
    Event Slides: www.canva.com/design/DAGdg_on7U8/OHPjUM_WC1XHb0djeumwnA/view?DAGdg_on7U8&utlId=h87289e862a

  • @lucindalinde4198
    @lucindalinde4198 2 days ago

    What a GIANT LEAP forward from December 2023!
    That video was how I discovered AI Makerspace.

  • @MrAhsan99
    @MrAhsan99 3 days ago +1

    Why are you using LangGraph here?
    It's a simple RAG setup (with a very good visual retriever, ColPali/ColQwen, this time), but that doesn't change the fact that it's a simple RAG problem.
    You can use LangChain to keep chat history, but putting everything into LangGraph's graph is pretty much overkill, imo.
    Just build a simple chat template for the Llama vision model and put the relevant results from the retriever into it.
    I can't figure out the use of LangGraph here, since there are no external tools or LLM routers involved.
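    The simpler approach the comment describes can be sketched as plain payload construction, no graph framework involved. This is a minimal sketch assuming an OpenAI-style multimodal chat schema (as accepted by Together AI's API for Llama 3.2 Vision models); `retrieve_pages` in the usage note stands in for a ColPali/ColQwen retriever and is hypothetical.

    ```python
    def build_vision_prompt(question: str, page_images_b64: list[str]) -> list[dict]:
        """Assemble a single-turn chat payload: retrieved page images + the question."""
        # One image block per retrieved page, inlined as a base64 data URL.
        content: list[dict] = [
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}}
            for b64 in page_images_b64
        ]
        # The user's question goes last, after the visual context.
        content.append({"type": "text", "text": question})
        return [
            {"role": "system", "content": "Answer using only the attached document pages."},
            {"role": "user", "content": content},
        ]
    ```

    Usage would then be a single chat-completions call, e.g. `client.chat.completions.create(model="meta-llama/Llama-3.2-11B-Vision-Instruct", messages=build_vision_prompt(q, retrieve_pages(q)))` (model name and retriever are illustrative).
    
    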

    • @AI-Makerspace
      @AI-Makerspace  3 days ago +1

      We're only using LangGraph here to showcase that this flow works in LangGraph and can be plugged into any existing LangGraph graph - there's no other good reason to have used it!

    • @MrAhsan99
      @MrAhsan99 3 days ago

      @@AI-Makerspace got it!