6-Building Advanced RAG Q&A Project With Multiple Data Sources With Langchain
- Published Dec 27, 2024
- Hello all, we are going to build an advanced RAG project with multiple data sources such as Arxiv, Wikipedia and others. Here we will be learning about agents, tools, toolkits and the agent executor.
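For reference, the wiring described above (Wikipedia and Arxiv wrapped as tools behind an agent executor) looks roughly like the sketch below. This is an assumption-based outline of common LangChain 0.1.x usage, not code taken from the video; the model name, result counts and hub prompt id are my guesses:

```python
# Sketch only: assumes langchain, langchain-community, langchain-openai,
# wikipedia and arxiv packages are installed, plus an OPENAI_API_KEY.
def build_agent():
    # Imports live inside the function so the sketch reads standalone.
    from langchain import hub
    from langchain.agents import AgentExecutor, create_openai_tools_agent
    from langchain_community.tools import ArxivQueryRun, WikipediaQueryRun
    from langchain_community.utilities import ArxivAPIWrapper, WikipediaAPIWrapper
    from langchain_openai import ChatOpenAI

    # Wrap Wikipedia and Arxiv as tools the agent can call
    wiki = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper(top_k_results=1))
    arxiv = ArxivQueryRun(api_wrapper=ArxivAPIWrapper(top_k_results=1))
    tools = [wiki, arxiv]

    # "hwchase17/openai-functions-agent" is the commonly used hub prompt (assumption)
    prompt = hub.pull("hwchase17/openai-functions-agent")
    llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
    agent = create_openai_tools_agent(llm, tools, prompt)
    return AgentExecutor(agent=agent, tools=tools, verbose=True)

# Usage (needs network access and an API key):
# executor = build_agent()
# executor.invoke({"input": "What is the attention mechanism?"})
```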
Code Github: github.com/kri...
---------------------------------------------------------------------------------------------
Support me by joining the membership so that I can keep uploading these kinds of videos
/ @krishnaik06
-----------------------------------------------------------------------------------
Fresh Langchain Playlist: • Fresh And Updated Lang...
►LLM Fine Tuning Playlist: • Steps By Step Tutorial...
►AWS Bedrock Playlist: • Generative AI In AWS-A...
►LlamaIndex Playlist: • Announcing LlamaIndex ...
►Google Gemini Playlist: • Google Is On Another L...
►Langchain Playlist: • Amazing Langchain Seri...
►Data Science Projects:
• Now you Can Crack Any ...
►Learn In One Tutorials
Statistics in 6 hours: • Complete Statistics Fo...
Machine Learning In 6 Hours: • Complete Machine Learn...
Deep Learning 5 hours : • Deep Learning Indepth ...
►Learn In a Week Playlist
Statistics: • Live Day 1- Introducti...
Machine Learning : • Announcing 7 Days Live...
Deep Learning: • 5 Days Live Deep Learn...
NLP : • Announcing NLP Live co...
---------------------------------------------------------------------------------------------------
My Recording Gear
Laptop: amzn.to/4886inY
Office Desk : amzn.to/48nAWcO
Camera: amzn.to/3vcEIHS
Writing Pad: amzn.to/3OuXq41
Monitor: amzn.to/3vcEIHS
Audio Accessories: amzn.to/48nbgxD
Audio Mic: amzn.to/48nbgxD
One of the best Langchain series. Thank God I was able to find such good content
Instead of OpenAI, please use any other downloadable open-source LLM which can be run locally 😊
Very nice tutorial, very helpful. Please add a note about including the Langchain API key when using prompts from the hub. I am learning a lot, thanks so much Krish
Thank you sir. Great videos you are making.
your videos are amazing... state of the art
simple and best Langchain series, keep up the good work.👏
Sir, understood the complete flow. Great explanation. ❤❤
Amazing information Krish. Thanks for making this series.
Wow this is what i wanted!
Can I use the same code with an open-source model by just changing the way it loads, sir?
I wrote email to you yesterday and the video came today! This is next level Krish sir! Thankyou ❤!
Yes, you can use Ollama with Llama 3. I did the same thing with open source.
Very comprehensive and super helpful!
God bless you , sir!
Thanks for introducing tools and agents
You can use Llama 2 or another open-source API instead...
Krish, fantastic work. Can you explain it keeping Ollama in context?
waiting for the next video
This video is very comprehensive and easy to understand; really grateful for your efforts, sir. However, could you please create a session on how to achieve function calling, tools and agents using Gemini Pro or any other open-source LLM? Unfortunately, there is no alternative to the OpenAI version (create_openai_tools_agent). Please explain the workaround for using other LLMs.
Exactly, I spent one full day on this and read a hell of a lot of documentation, although I gained a lot of knowledge.
I found a function using Gemini which someone wrote, since there is no agent-formation tool for Ollama that supports chat generation using different tools and Ollama together at the same time:
import ast  # safer literal parsing than eval

def process_user_request(user_input):
    # Parse user input for potential tool usage, e.g. 'look this up {tool_name: "args"}'
    if "{" in user_input and "}" in user_input:
        # Extract tool name and arguments
        tool_call = user_input.split("{")[1].split("}")[0]
        tool_name, arguments = tool_call.split(":", 1)
        arguments = ast.literal_eval(arguments.strip())  # eval() on user input is unsafe
        # Find the corresponding tool function
        for tool in tools:
            if tool.__name__ == tool_name.strip():
                # Execute the tool with the user arguments
                return tool(arguments)
    # User request doesn't involve a tool, respond normally
    return f"I understand, but I can't use a tool for this request. {user_input}"

while True:
    # Get user input
    user_input = input("User: ")
    # Process the user request and generate a response with Llama 2
    tool_list = "\n* ".join(t.__name__ for t in tools)
    response = model.generate(
        input_ids=model.tokenizer.encode(
            prompt.format(user_input=user_input, list_of_available_tools=tool_list)
        ),
        max_length=1024,
        num_beams=5,
        no_repeat_ngram_size=2,
        early_stopping=True,
    )
    # Extract and format the generated response
    generated_text = model.tokenizer.decode(response[0]["generated_tokens"], skip_special_tokens=True)
    tool_output = process_user_request(user_input)
    final_response = generated_text.replace("{generated_response}", str(tool_output))
    # Print the final response to the user
    print(final_response)
There is something called binding of functions which I could not understand at all.
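The "binding" mentioned here most likely refers to attaching tool schemas to the model call, which LangChain exposes through llm.bind_tools and the generic tool-calling agent. A hedged sketch of what that usually looks like (the hub prompt id and version behavior are assumptions; check your LangChain release):

```python
# Sketch: "binding" usually means attaching tool schemas to the chat model.
def build_tool_calling_agent(llm, tools):
    # Imports kept inside so the sketch reads standalone; assumes langchain >= 0.1.x
    from langchain import hub
    from langchain.agents import AgentExecutor, create_tool_calling_agent

    # "hwchase17/openai-tools-agent" is an assumed hub prompt id
    prompt = hub.pull("hwchase17/openai-tools-agent")
    # create_tool_calling_agent binds the tools to the model internally,
    # so it works with any chat model that implements bind_tools.
    agent = create_tool_calling_agent(llm, tools, prompt)
    return AgentExecutor(agent=agent, tools=tools)
```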
I used create_openai_tools_agent with Llama 3 and it worked fine. I think you can use it; even the prompt loaded from the hub is for OpenAI models, but it worked fine with the Llama 3 70B model.
@@aj.arijit You can use create_react_agent instead of create_openai_tools_agent to prepare an agent with Ollama and the llama3.2 model. I built the same thing with an open-source model.
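A minimal sketch of that suggestion, assuming a local Ollama server and LangChain 0.1.x-era APIs (the hub prompt id "hwchase17/react" and the default model name are assumptions):

```python
# Sketch: a ReAct agent over Ollama, the open-source alternative to
# create_openai_tools_agent suggested in the comment above.
def build_react_agent(tools, model_name="llama3.2"):
    # Imports inside the function so the sketch reads standalone.
    from langchain import hub
    from langchain.agents import AgentExecutor, create_react_agent
    from langchain_community.chat_models import ChatOllama

    llm = ChatOllama(model=model_name)      # assumes Ollama is running locally
    prompt = hub.pull("hwchase17/react")    # the standard ReAct hub prompt
    agent = create_react_agent(llm, tools, prompt)
    # handle_parsing_errors helps with models that drift from the ReAct format
    return AgentExecutor(agent=agent, tools=tools, handle_parsing_errors=True)

# Usage (needs Ollama running):
# executor = build_react_agent([wiki, arxiv])
# executor.invoke({"input": "Summarize the attention paper"})
```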
Hello Krish,
I built a PDF bot using Google PaLM 2, but it doesn't give proper responses because the prompt works on public data. How do I constrain the prompt to a specific dataset? Please reply.
Please don't use OpenAI in this project, as its API is paid and we can't access it. Instead, use Gemini or any other open-source model so that we can also try it at our end.
Not Gemini, as they are removing free features quickly
@@datatalkswithchandranshu2028 okk..
Break your training data into chunks smaller than the token limit so that you can use the free version even for big data...
Switch the LLM in langchain
That is somewhat complicated, and we need help with that only from Krish sir @@theinhumaneme
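The chunking idea suggested above can be sketched in plain Python. In practice LangChain's RecursiveCharacterTextSplitter is the usual tool for this; the function below is just a minimal stand-in using a character budget with overlap (the sizes are arbitrary assumptions):

```python
def chunk_text(text, max_chars=1000, overlap=100):
    """Split text into overlapping chunks no longer than max_chars.

    Overlap keeps context that would otherwise be cut at a chunk boundary.
    """
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + max_chars, len(text))
        chunks.append(text[start:end])
        if end == len(text):
            break
        # Step back by `overlap` so adjacent chunks share some text
        start = end - overlap
    return chunks
```

Each chunk can then be embedded and indexed separately, which is what keeps every individual LLM call under the free tier's token limit.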
Amazing videos, Krish! I have a question that you haven't covered yet. After getting results from the similarity search in RAG mode, you attach them to the prompt and send them to the LLM. Given the context-length limit when querying the LLM, what approaches do you take if this limit is exceeded? Please explain this, or create a video with code on this topic.
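One common answer to the question above is to trim the retrieved documents to a fixed budget before building the prompt (other options are map-reduce or refine-style chains that summarize chunks first). A minimal character-budget sketch, assuming the docs arrive sorted by relevance:

```python
def fit_context(docs, max_chars=4000):
    """Keep the highest-ranked docs until the character budget is used up.

    Assumes `docs` is a list of strings sorted most-relevant first.
    The last doc is truncated if it only partially fits.
    """
    selected, used = [], 0
    for doc in docs:
        if used + len(doc) > max_chars:
            remaining = max_chars - used
            if remaining > 0:
                selected.append(doc[:remaining])  # partial fit
            break
        selected.append(doc)
        used += len(doc)
    return selected
```

A real implementation would count tokens with the model's tokenizer rather than characters, but the packing logic is the same.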
Sir, can you please elaborate on using a Neo4j knowledge graph to build a RAG application?
Hi, very interesting video! How do I get the documents and their metadata returned by the retriever? I would like to show the user, for example, the Wikipedia links or articles related to the answer.
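Retrievers in LangChain return Document objects carrying .page_content and .metadata, so the links can be pulled from the metadata. A self-contained sketch with a stand-in Document class (the metadata keys "title" and "source" are what Wikipedia-based loaders commonly set; verify them against your retriever):

```python
class Doc:
    """Minimal stand-in for a LangChain Document: retrievers return
    objects with .page_content and a .metadata dict."""
    def __init__(self, page_content, metadata=None):
        self.page_content = page_content
        self.metadata = metadata or {}

def extract_sources(docs):
    """Collect (title, url) pairs from each document's metadata so they
    can be shown to the user alongside the answer."""
    sources = []
    for doc in docs:
        meta = getattr(doc, "metadata", {}) or {}
        sources.append((meta.get("title", "untitled"), meta.get("source", "")))
    return sources
```

With a real retriever the call would be something like docs = retriever.invoke(query), and extract_sources(docs) then gives you the links to display.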
I tried adding an SQLDatabase tool to the tools list. I got an error because, I think, QuerySQLDataBaseTool is not really returning a tool. What am I supposed to do if I want to add an SQL database to the list of tools without any error? Kindly help.
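One workaround for the problem above is to skip QuerySQLDataBaseTool entirely and wrap a plain query function yourself, since LangChain's generic Tool wrapper accepts any callable. The sqlite part below is standard library; the commented registration at the bottom is a hypothetical sketch (names and signature depend on your LangChain version):

```python
import sqlite3

def make_sql_tool(db_path):
    """Return a plain function that runs a SQL query against a SQLite
    database and returns the rows as text, suitable for wrapping as a tool."""
    def run_query(sql: str) -> str:
        conn = sqlite3.connect(db_path)
        try:
            rows = conn.execute(sql).fetchall()
        finally:
            conn.close()
        return "\n".join(str(row) for row in rows)
    return run_query

# Hypothetical registration (check your LangChain version's Tool API):
# from langchain.agents import Tool
# sql_tool = Tool(name="company_db", func=make_sql_tool("my.db"),
#                 description="Answers questions from the local SQLite database")
# tools = [wiki, arxiv, sql_tool]
```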
Hi Krish, if we use create_conversational_retrieval_agent, how do we pass the prompt? Is the prompt mandatory?
amazing - Thank you
Hi, Can you make a video on multitenancy using agents and tools?
Hey Krish, I developed an app using RAG for QC of manually populated data in Excel, but the model is not performing accurately. I used Llama 2; is there any other better open-source LLM?
thank you so much
Great video. Any alternatives to Langsmith?
Is this updated langchain content available in your udemy course? Or is the udemy course in need of updates?
Sir please make a video on virtual car assistant using LLMs
You're awesome.
PDF upload agent not implemented, right?
Hi Krish, can you make a video on a conversational chatbot trained on its own data source?
How can we extract multi-columnar tabular data, especially from images?
Hi @Krish Naik, I saw many videos on generative AI, but I feel there are many missing connections between basic-level understanding and coding understanding. I think everyone is capable of loading the libraries, using the classes, and getting the code done. Please also cover the core concepts of prompts, chat models, tools, agents, memory and chains, with their types and where to use them, alongside the code. The basic knowledge in these videos is broken into different parts across time and space. Hope you will find this comment.
Learn from the blogs posted on websites from their it will be easy to understand things like agents,Tools etc
@@vinayaksharma3650 Hi, thanks for the suggestion. I read them and tried to understand, but some of the concepts are high-level overviews that still need explaining. My point in the comment above was: if someone is explaining things which are already covered in the documentation, the explanation should be easier to follow than the document itself.
Hi Krish,
Can you please create a video on LangGraph??
Hi dear friend .
Thank you for your efforts .
How can this tutorial be used for PDFs in another language (for example Persian)?
What would the approach be?
I made many efforts and tested different models, but the results when asking questions about the PDFs are not good or accurate!
Thank you for the explanation
How do we measure the accuracy of our search, with an implementation?
Great video. Ignore the comments on not using OpenAI, if they don’t want to pay they wouldn’t be the ones to develop actual apps anyway
Man I thought you got your hair back! 😂😂😂❤❤❤
I am learning Stats, sql and ML from miscellaneous videos. I want to start with a clean course . Which one is better to pursue data science career ? IBM Machine Learning or Google Advanced Data Analytics?
Find a roadmap and follow it loosely, without big reroutes.
Then find a person who teaches the concepts in a way you can absorb, and practice whether or not they recommend it.
For example, I found these people very much as per my taste.
- Codebasics, Krish for ML concepts with analogies and practical implementation.
- StatQuest for visual interpretation and understanding of Statistical concepts.
I hope it would be useful for you.
Hey guys, is anyone else having an issue with the invoke call when using Ollama (llama2 and llama3)? I get the following error: ValueError: Ollama call failed with status code 400. Details: {"error":"invalid options: tools"}
There is something called binding of functions, which I could not understand, that could solve the problem using the langchain.agents function create_tool_calling_agent.
I decided not to use the retriever tool, so tools=[wiki,arxiv], and the following error occurred:
TypeError: type 'Result' is not subscriptable
Can you provide the whole code? I can help you if you want.
This works for me and I have created its UI as well; it's working fine. Please send the code or more details about it and we will help you.
Is blockchain a good career to start in 2024, and what is its future scope?
Sir, can you do Embedchain tutorials?
🙏🙂👍
Please don't use any paid API key; we can't access it.
The code explanation is not good; each class and component should be explained properly. It is very confusing; you are just writing the code that you have been working with.
With due respect, you are a little fast here. I don't know why you are in a hurry these days; please go a little slower, maybe 80% of your current speed. Thanks.
Did anyone face the "orjson.orjson" module error below?
error: ModuleNotFoundError: No module named 'orjson.orjson'
It comes after running the line "from langchain_community.tools import WikipediaQueryRun".
This error occurs with Python 3.12, but it worked with a lower version, i.e. 3.10.