NLP & AI database integration | Get Insights from database using NLP | Chat with database | AI | NLP

  • Published 6 Oct 2024

COMMENTS • 12

  • @GordonShamway1984 3 months ago +1

    Wonderful as always and just in time. Was going to build a similar use case that auto generates database docs for business users next week. This comes in handy🎉
    Thank you again and again

    • @BiInsightsInc 3 months ago

      Glad it was helpful! Happy coding.

  • @mohdmuqtadar8538 3 months ago +1

    Great video!
    What if the response from the database exhausts the context window of the model?

    • @BiInsightsInc 3 months ago +1

      Thanks. If you are hitting the model's maximum context length, you can try the following.
      1. Choose a different LLM that supports a larger context window.
      2. Brute force: chunk the document and extract content from each chunk.
      3. RAG: chunk the document and only extract content from the subset of chunks that look “relevant”.
      Here is an example of these approaches from LangChain (a sketch follows below).
      js.langchain.com/v0.1/docs/use_cases/extraction/how_to/handle_long_text/
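      A minimal sketch of option 2 (brute-force chunking), assuming LangChain's Python packages (langchain-text-splitters, langchain-ollama) and a local Ollama model; the file name, chunk sizes, and prompts are illustrative, not from the video:

        # Split a long query result into chunks, summarize each chunk,
        # then combine the partial answers. Model name and prompts are
        # illustrative assumptions, not the video's setup.
        from langchain_text_splitters import RecursiveCharacterTextSplitter
        from langchain_ollama import ChatOllama

        rows_as_text = open("query_result.txt").read()  # large result set rendered as text

        # Pieces small enough to fit the model's context window.
        splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=100)
        chunks = splitter.split_text(rows_as_text)

        llm = ChatOllama(model="llama3.1")

        # Extract from each chunk independently...
        partials = [
            llm.invoke(f"Summarize the sales figures in this excerpt:\n{chunk}").content
            for chunk in chunks
        ]
        # ...then combine the partial answers into one response.
        answer = llm.invoke(
            "Combine these partial summaries into one answer:\n" + "\n\n".join(partials)
        ).content
        print(answer)

      Option 3 would add an embedding/retrieval step so only the chunks relevant to the question are passed to the model.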

  • @michaelaustin1638 2 months ago +1

    Awesome video! How did you get the various categories when creating a model?

    • @BiInsightsInc 2 months ago

      Thanks. Those are the defaults in OpenWebUI. You can select the relevant categories for a custom model.

  • @mahraneabid 2 months ago

    When it said "would you like me to break down the sales by product" and you responded with yes, will it perform the action it mentioned or not?

    • @BiInsightsInc 1 month ago

      It may work if the SQL model is able to generate SQL for the question. You can try it and let us know if this extended option works.

  • @mahraneabid 2 months ago

    Hi sir, the edited model can't be seen by Ollama. When I call ollama list in CMD, it displays only llama3.1. Why?

    • @BiInsightsInc 1 month ago

      If you do not see the custom model in your Ollama ecosystem, check the model file to make sure it's correct. Here is an example of a custom model file from OpenWebUI (a Modelfile sketch follows below): openwebui.com/m/darkstorm2150/Data-Scientist:latest
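      For reference, a minimal Ollama Modelfile sketch (the base model and system prompt here are illustrative, not the one from the video); after building it, the custom model should appear in ollama list:

        # Modelfile
        FROM llama3.1
        SYSTEM """You are a data analyst. Answer questions about the sales database."""
        PARAMETER temperature 0.2

      Build and verify with:

        ollama create data-analyst -f Modelfile
        ollama list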

  • @krishnarajuyoutube 2 months ago

    Can we run Llama 3 locally on any simple VPS server, or do we need GPUs?

    • @BiInsightsInc 2 months ago

      Hi, you'd need a GPU to run an LLM. By the way, VPS servers can have GPUs.