LangChain v0.1.0 Launch: Retrieval
- Published Jan 7, 2024
- Retrieval augmented generation (RAG) is currently the best way to connect LLMs to your private data. LangChain offers extensive, production-ready functionality to help with this.
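To make the idea concrete, here is a minimal, library-free sketch of the RAG pattern the video covers: retrieve the document most relevant to a question, then splice it into the prompt that would go to an LLM. The documents, the word-overlap scorer, and the function names are illustrative stand-ins, not LangChain APIs.

```python
# Toy retrieval augmented generation: retrieve a relevant document,
# then build an augmented prompt for the LLM. Scoring by shared words
# is a stand-in for real embedding similarity.

DOCS = [
    "LangChain ships retrievers that fetch documents relevant to a query.",
    "Vector stores index embeddings so similar texts can be found quickly.",
    "Text splitters chunk long documents before embedding.",
]

def retrieve(question: str, docs: list[str]) -> str:
    """Return the doc sharing the most words with the question (toy scorer)."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question: str) -> str:
    """Augment the prompt with retrieved context before calling an LLM."""
    context = retrieve(question, DOCS)
    return f"Answer using this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("How do vector stores help find similar texts?")
print(prompt)
```

In real LangChain code the retriever would be backed by an embedding model and a vector store, but the prompt-assembly step looks much like this.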
Jupyter Notebook (to follow along): github.com/hwchase17/langchai...
JavaScript Notebook: github.com/bracesproul/langch...
Links:
Retrieval Documentation: python.langchain.com/docs/mod...
Advanced Retrieval Methods: python.langchain.com/docs/mod...
QA with RAG Use Case Documentation: python.langchain.com/docs/use...
So appreciate the relentless refinement of the product and the documentation. It really helps.
Really appreciate all the hard work that's been done improving the documentation. This is good stuff
LangChain is underrated for this 🙏
Thank you for this
How does LangChain's retrieval compare with LlamaIndex's?
May I ask a question about building a chain to query a JSON file?
I find it unclear how to do retrieval: loading > splitting (not sure whether this is necessary for JSON) > embedding > vector store, and then building a chain to invoke Q&A over the JSON file. The documentation only covers up to the `data = loader.load()` step.
Any help would be highly appreciated!
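One possible shape of that load > split > embed > store > query pipeline, sketched with the standard library only so it runs anywhere. The flattening step, the letter-frequency "embedding", and the list-based "store" are toy stand-ins for LangChain's JSON loading, text splitters, embedding models, and vector stores; the JSON payload is invented for illustration.

```python
# Sketch: load a JSON file, turn records into text chunks, embed them,
# keep them in an in-memory "vector store", and retrieve by similarity.
import json
import math

raw = json.loads(
    '{"faq": ['
    '{"q": "What is RAG?", "a": "Retrieval augmented generation."},'
    '{"q": "Why split?", "a": "Chunks fit the embedding context window."}'
    ']}'
)

# "Load + split": flatten each JSON record into one text chunk. Small
# records may not need further splitting; long values would be chunked here.
chunks = [f"Q: {item['q']} A: {item['a']}" for item in raw["faq"]]

def embed(text: str) -> list[float]:
    """Toy embedding: letter-frequency vector (real code uses a model)."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# "Vector store": a list of (embedding, chunk) pairs.
store = [(embed(c), c) for c in chunks]

def query(question: str) -> str:
    """Retrieve the chunk most similar to the question."""
    q_vec = embed(question)
    return max(store, key=lambda pair: cosine(q_vec, pair[0]))[1]

best = query("what does RAG stand for")
```

The retrieved chunk would then be passed as context into a Q&A chain, as in the description example above; swapping the toy pieces for a real loader, embedding model, and vector store keeps the same overall structure.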
Can I ask why, after installation, I cannot find `langchain_openai`?
It's not installed by default; you need to run `pip install langchain-openai`.