Chat With Websites Using ChainLit / Streamlit, LangChain, Ollama & Mistral 🧠

  • Published 8 Nov 2024

COMMENTS • 18

  • @rgm4646 • 8 months ago

    This is great, and works very well! I have tried it with several 13B-parameter models.

  • @jofus521 • 6 months ago

    Can’t wait to try this. It’s perhaps the best intro I’ve seen, especially for Python noobs like me.
    The LangChain and LangGraph examples are great, but the Jupyter notebooks just kill me. Very painful to convert those to decent code.

  • @SantK1208 • 8 months ago

    Thanks for sharing quality content.
    I have a query: please share some videos on creating a Q&A system with local PDFs, web pages, etc. using locally stored LLMs, also using LlamaIndex and LangChain.
    Thanks

    • @datasciencebasics • 8 months ago

      You are welcome. Please check the other videos on this channel, you might already find the answer :)
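
A minimal sketch of the local-PDF part of that request, assuming the langchain-community loaders, a local Chroma store, and a locally running Ollama model; the file name, chunk sizes, and question are placeholders, not taken from the video:

```python
# Sketch: Q&A over a local PDF with a locally stored LLM (Ollama + Mistral).
# Assumes: pip install langchain langchain-community pypdf chromadb
# and that `ollama pull mistral` has already been run.
from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma
from langchain_community.chat_models import ChatOllama
from langchain.chains import RetrievalQA

# Load a local PDF (placeholder path) and split it into retrievable chunks
docs = PyPDFLoader("report.pdf").load()
chunks = RecursiveCharacterTextSplitter(
    chunk_size=1000, chunk_overlap=100
).split_documents(docs)

# Embed the chunks locally and build a retrieval QA chain on top
vectordb = Chroma.from_documents(chunks, OllamaEmbeddings(model="mistral"))
qa = RetrievalQA.from_chain_type(
    llm=ChatOllama(model="mistral"),
    retriever=vectordb.as_retriever(),
)
print(qa.invoke({"query": "What is this document about?"})["result"])
```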

  • @pauldelage2941 • 6 months ago

    Thanks so much for your tutorial! Is it possible to stream the tokens and also return the sources at the end of the response?
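
There is no reply in the thread, but one way this could look in Chainlit, assuming the retriever and model are stored in the user session during on_chat_start; the session keys, prompt, and metadata field are illustrative guesses, not the video's code:

```python
# Sketch: stream tokens as they arrive and append the sources at the end.
import chainlit as cl

@cl.on_message
async def on_message(message: cl.Message):
    # Assumed to be created earlier in @cl.on_chat_start (hypothetical keys)
    retriever = cl.user_session.get("retriever")
    llm = cl.user_session.get("llm")  # e.g. ChatOllama(model="mistral")

    # Retrieve context first so the source documents are known up front
    docs = await retriever.ainvoke(message.content)
    context = "\n\n".join(d.page_content for d in docs)
    prompt = f"Answer using this context:\n{context}\n\nQuestion: {message.content}"

    # Stream the model's tokens into the UI as they are generated
    msg = cl.Message(content="")
    async for chunk in llm.astream(prompt):
        await msg.stream_token(chunk.content)

    # Append whatever source identifiers the loader put in the metadata
    sources = {d.metadata.get("source", "unknown") for d in docs}
    await msg.stream_token("\n\nSources: " + ", ".join(sorted(sources)))
    await msg.send()
```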

  • @attilavass6935 • 8 months ago +1

    My main goal is not to chat with one or more HTML pages referred to by URL(s), but to enter the URL of the home page of, e.g., an online doc, then crawl, scrape and process it, and chat with ALL the pages of that site.

    • @Sowmya_codes • 2 months ago

      Were you able to figure that out?
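
For the crawl-the-whole-site goal in this thread, one possible starting point is LangChain's RecursiveUrlLoader, which follows links from a root URL down to a given depth; the root URL and depth below are placeholders, and the resulting documents would feed the same split/embed/retrieve pipeline used for a single page:

```python
# Sketch: crawl every page reachable from a docs site root.
# Assumes: pip install langchain-community beautifulsoup4
from bs4 import BeautifulSoup
from langchain_community.document_loaders import RecursiveUrlLoader

loader = RecursiveUrlLoader(
    "https://docs.example.com/",  # placeholder root URL
    max_depth=3,                  # how many link hops to follow from the root
    extractor=lambda html: BeautifulSoup(html, "html.parser").get_text(),
)
docs = loader.load()
print(f"Crawled {len(docs)} pages")
```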

  • @SimonMariusGalyan • 5 months ago

    Thank you 😊

  • @jyothhiswaroop4270 • 6 months ago

    May I know, if the URL needs authentication, like a company Confluence page, how can we handle that case?

    • @datasciencebasics • 6 months ago

      For company pages like Confluence that need authentication, you can use loaders that support Confluence. Take help from this link: python.langchain.com/docs/integrations/document_loaders/confluence/
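
Following the loader documented at that link, an authenticated load could look roughly like this; the site URL, account, space key, and the environment variable holding the Atlassian API token are all placeholders:

```python
# Sketch: load pages from an authenticated Confluence space.
# Assumes: pip install langchain-community atlassian-python-api
import os
from langchain_community.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="https://yourcompany.atlassian.net/wiki",  # placeholder site URL
    username="you@yourcompany.com",                # placeholder account
    api_key=os.environ["CONFLUENCE_API_KEY"],      # placeholder env var
    space_key="SPACE",                             # placeholder space key
    limit=50,                                      # pages fetched per request
)
documents = loader.load()
```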

  • @yjfeishegu • 7 months ago

    Thanks!

  • @SoloJetMan • 8 months ago

    What's the ideal CPU/GPU setup to run this on my PC?

    • @datasciencebasics • 8 months ago

      It depends on which model you want to use. Please take help from Ollama’s GitHub page -> github.com/ollama/ollama

  • @Arunkumar-qf5it • 7 months ago

    It's taking too much time, 15-20 minutes, to get the result.

    • @datasciencebasics • 7 months ago

      Unfortunately, local LLM speed depends on your hardware. You either need better hardware, or you can use APIs for the LLM call.
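
For the API route suggested here, one common option is swapping the local ChatOllama model for a hosted one; this sketch assumes the langchain-openai package and an OpenAI API key, neither of which appears in the video:

```python
# Sketch: replace the local Ollama model with a hosted API model.
# Assumes: pip install langchain-openai, with OPENAI_API_KEY set in the environment.
from langchain_openai import ChatOpenAI
# from langchain_community.chat_models import ChatOllama  # the local option

# llm = ChatOllama(model="mistral")    # slow without capable hardware
llm = ChatOpenAI(model="gpt-4o-mini")  # offloads inference to the API
print(llm.invoke("Say hello in one word.").content)
```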