LLM-Powered Text-to-SQL with Amazon Bedrock Agent Explained

  • Published Nov 29, 2024

COMMENTS • 8

  • @ghazwannamoujablak4265 3 months ago

    Many thanks for your prompt answers. Can't wait to see the next video

    • @DenysonData 3 months ago

      Just uploaded the video. Curious to learn what you think

  • @ghazwannamoujablak4265 3 months ago +1

    Great video. Could you please explain, here or in a separate video, the Glue and metadata extraction part?

    • @DenysonData 3 months ago +1

      Thank you. Sure. Will do so during the coming weekend :)

  • @ghazwannamoujablak4265 3 months ago +1

    I would also be grateful if you could share a walkthrough of how to create few-shot examples.

    • @DenysonData 3 months ago

      Yep. Will make sure to include it.

  • @ghazwannamoujablak4265 3 months ago

    It seems like RAG (a knowledge base) has not been used in the architecture; LlamaIndex is used instead. So
    the LLM (foundation model) is building the query with the help of the user's NLP input + few-shot examples + table metadata, right?

    • @DenysonData 3 months ago +1

      Correct. There is no knowledge base, but the approach for pulling table metadata and for identifying the most relevant queries is exactly the same as used in RAG: measuring the similarity between the user input and the various elements.
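
The RAG-style step described in this thread — rank stored few-shot (question, SQL) pairs by similarity to the user's question, then assemble a prompt from the table metadata plus the best matches — can be sketched roughly like this. This is a minimal toy sketch, not the video's actual code: the table schema, the example pairs, and the bag-of-words "embedding" are all placeholder assumptions, and a real Bedrock setup would call a proper embedding model instead.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" so the sketch runs offline;
    # a real pipeline would call an embedding model here instead.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse token-count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical few-shot pool of (question, SQL) pairs.
FEW_SHOTS = [
    ("How many orders were placed last month?",
     "SELECT COUNT(*) FROM orders "
     "WHERE order_date >= DATE_TRUNC('month', CURRENT_DATE - INTERVAL '1' MONTH);"),
    ("List the top 5 customers by revenue.",
     "SELECT customer_id, SUM(amount) AS revenue FROM orders "
     "GROUP BY customer_id ORDER BY revenue DESC LIMIT 5;"),
    ("What is the average shipping time?",
     "SELECT AVG(delivered_at - shipped_at) FROM shipments;"),
]

def pick_few_shots(question, k=2):
    # Rank stored examples by similarity to the user question --
    # the RAG-style retrieval step described above.
    q = embed(question)
    ranked = sorted(FEW_SHOTS, key=lambda ex: cosine(q, embed(ex[0])), reverse=True)
    return ranked[:k]

def build_prompt(question, table_metadata):
    # Assemble the pieces the comment lists: table metadata +
    # retrieved few-shot examples + the user's NLP input.
    shots = pick_few_shots(question)
    examples = "\n\n".join(f"Q: {q}\nSQL: {sql}" for q, sql in shots)
    return (f"Tables:\n{table_metadata}\n\n"
            f"Examples:\n{examples}\n\n"
            f"Q: {question}\nSQL:")

prompt = build_prompt(
    "Which customers generated the most revenue?",
    "orders(customer_id INT, amount DECIMAL, order_date DATE)",
)
print(prompt)
```

The foundation model would then complete the prompt with the generated SQL; swapping the toy `embed` for a real embedding model and the inline pool for a vector index is what LlamaIndex handles in the architecture discussed above.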