This is great, and works very well! I have tried it with several 13B-parameter models
Can’t wait to try this. It’s perhaps the best intro I’ve seen, especially for python noobs like me.
The LangChain and LangGraph examples are great, but the Jupyter notebooks just kill me. Very painful to convert those to decent code.
Thanks for sharing quality content.
I have a query - please share some videos on creating a Q&A system with local PDFs, web pages, etc. using locally stored LLMs, and also using LlamaIndex and LangChain.
Thanks
You are welcome. Please check the other videos on this channel, you might already find the answer :)
Thanks so much for your tutorial! Is it possible to stream the tokens and also return the sources at the end of the response?
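One common pattern for this: stream the answer tokens as they arrive, then emit the retrieved documents' source metadata after the last token. A minimal sketch of that idea, where `retrieve` and `generate` are hypothetical callables standing in for the real retriever and LLM (injected so the sketch runs without any model):

```python
def stream_with_sources(question, retrieve, generate):
    """Yield answer tokens one by one, then a final dict of sources.

    `retrieve`: question -> list of (text, source) document tuples.
    `generate`: (question, context) -> iterator of answer tokens.
    Both are placeholders for your retriever and LLM call.
    """
    docs = retrieve(question)
    # Build the context the LLM will answer from.
    context = "\n\n".join(text for text, _ in docs)
    # Stream the answer tokens as they are produced.
    for token in generate(question, context):
        yield token
    # After the answer finishes, emit the deduplicated sources.
    yield {"sources": sorted({src for _, src in docs})}


# Tiny demo with fake retriever and LLM:
docs = [("Paris is the capital.", "geo.pdf"), ("France facts.", "facts.pdf")]
chunks = list(stream_with_sources(
    "capital of France?",
    lambda q: docs,
    lambda q, c: iter(["Par", "is"]),
))
```

In a real LangChain setup you would iterate the chain's `.stream(...)` output instead of the fake `generate`, and pull sources from the retrieved `Document` objects' `metadata`.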
My main goal is not to chat with one or more HTML pages referred to by URL(s), but to enter the URL of the home page of e.g. an online doc site, crawl, scrape, and process it, and chat with ALL of its pages.
Were you able to figure that out?
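For crawling a whole doc site, LangChain's `RecursiveUrlLoader` (in `langchain_community.document_loaders`) is worth checking in the docs — it follows links from a start URL up to a `max_depth`. The core idea, a breadth-first crawl restricted to the start URL's domain, can also be sketched in plain Python. The `fetch` callable is injected here (in real use it would be something like `requests.get(url).text`) so the sketch stays self-contained:

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse


class LinkParser(HTMLParser):
    """Collects href values from <a> tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(start_url, fetch, max_pages=100):
    """Breadth-first crawl of a site, returning {url: html}.

    Only follows links on the same domain as `start_url`.
    `fetch` is a callable url -> HTML string, injected so this
    sketch runs without network access.
    """
    domain = urlparse(start_url).netloc
    seen = {start_url}
    queue = deque([start_url])
    pages = {}
    while queue and len(pages) < max_pages:
        url = queue.popleft()
        try:
            html = fetch(url)
        except Exception:
            continue  # skip pages that fail to load
        pages[url] = html
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            # Resolve relative links and drop fragment anchors.
            absolute = urljoin(url, href).split("#")[0]
            if urlparse(absolute).netloc == domain and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return pages
```

The returned pages can then be split into chunks and embedded into the vector store exactly as the single-page example does.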
Thank you 😊
You are welcome !!
May I know, if the URL needs authentication, like a company Confluence page, how we can handle that case?
For company pages like Confluence which need authentication, you can use loaders that support Confluence. Take help from this link: python.langchain.com/docs/integrations/document_loaders/confluence/
Thanks!
You are welcome !!
What's the ideal CPU/GPU setup to run this on my PC?
It depends on which model you want to use. Please take help from Ollama's GitHub page -> github.com/ollama/ollama
It's taking too much time, 15-20 minutes, to get the result.
Unfortunately, local LLM speed depends on your hardware. You need better hardware, or you can use APIs for the LLM call.