Support me by joining the membership so that I can keep uploading these kinds of videos.
ua-cam.com/channels/NU_lfiiWBdtULKOw6X0Dig.htmljoin
Make a Patreon.
Just became a member of this amazing channel.
I am using this series to learn LangChain and deploy it in our Fortune 10 company. One of our key challenges is knowledge retrieval for all of the engineers on our team. I ran this scenario and we may see savings of $500K per year. Please keep doing what you are doing.
Error initializing FAISS index: Error raised by inference endpoint: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NewConnectionError(': Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
Can you solve this error?
Consider supporting the creator with 1% of the savings.
Replying just for fun, take it as a joke:
If you support with 1% of your income,
then at most you will support 50 people in your life, i.e., 50% of your income,
and you'll go starving soon 🤣
since the other 50% goes as taxes to the government, by direct or indirect means 🤣
On a serious note.
Krish sir is doing a great job for all of us; we should support him in whatever ways possible.
Best LangChain series by the best teacher, Krish sir ❤
Can't wait till Groq introduces an option for running almost any Hugging Face model! I've never come across such a simple, fast, and cost-effective endpoint setup!
Krish, you are doing an excellent job! Great job, bro!
Hi Krish, thanks a lot for such amazing content. You are doing a great service to humanity by making such amazing content available for free to everyone.
Very helpful set of videos. And yes, when I type words that have 'ng' in them, the 'g' almost always comes before the 'n'.
Great work, but please make a video on implementing RAG on complex PDFs (like extracting financial statement tables).
Nice video bro, thanks a lot!
Can you teach us how to build with Grok (from X)?
Great work sir, Like from Pakistan
Error initializing FAISS index: Error raised by inference endpoint: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NewConnectionError(': Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
I'm getting this error, why?
I got the same error; check your .env file, the key paths are not set properly.
@aiml.meetsolanki I got the same error! Can you explain the solution?
@samagrashrivastava8042 The problem is with the API key initialization; it is not set properly. Initialize it properly.
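For anyone still hitting this, a minimal sketch of the usual cause, assuming the index is built with Ollama embeddings as in the video: a connection refused on localhost:11434 generally points to the local Ollama server not running rather than the keys, so start Ollama (the desktop app or "ollama serve") and pull the embedding model before building the index.

# Assumes Ollama is installed, running locally, and "ollama pull llama2" has been done.
from langchain_community.document_loaders import WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import FAISS

docs = WebBaseLoader("https://docs.smith.langchain.com/").load()  # example URL; use your own source
chunks = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200).split_documents(docs)

# If the Ollama server is not running, this step raises the connection-refused error on port 11434.
embeddings = OllamaEmbeddings(model="llama2")
vectors = FAISS.from_documents(chunks, embeddings)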
Great work!!
Hey Krish, I hope you are doing well. Can you make a video about the full-stack data scientist, aka unicorn data scientist, role?
Sir, the latest advancements in AI and data science trigger anxiety since I am in the early stages of my data science roadmap. Will I ever catch up with the latest technology, or do the basics still remain relevant?
Yes, tools will always change, but domain knowledge will remain. Enjoy the anxiety.
Hi Krish, I just tried replacing the WebBaseLoader with PyPDFLoader, but somehow the same piece of code does not seem to be working. Any suggestions on how to replace the WebBaseLoader with PyPDFLoader?
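A hedged sketch of the swap, assuming a local file (the name attention.pdf below is just an example) and that the pypdf package is installed: PyPDFLoader takes a file path instead of a URL, and everything downstream stays the same.

# pip install pypdf   (required by PyPDFLoader)
from langchain_community.document_loaders import PyPDFLoader

loader = PyPDFLoader("attention.pdf")  # example path; point this at your own PDF
docs = loader.load()                   # one Document per page, same list-of-Documents shape as WebBaseLoader
# The splitting, embeddings, FAISS index, and retrieval chain code is unchanged.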
Would it be possible to compute the vector embeddings beforehand so the app replies faster?
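It should be, yes: a FAISS index can be saved to disk once and loaded at startup, so embeddings are not recomputed on every run. A rough sketch, assuming the same Ollama embeddings as in the video and a hypothetical folder name "faiss_index":

import os
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import FAISS

embeddings = OllamaEmbeddings(model="llama2")

if os.path.isdir("faiss_index"):
    # Reload the prebuilt index; newer LangChain versions require this flag for pickle deserialization.
    vectors = FAISS.load_local("faiss_index", embeddings, allow_dangerous_deserialization=True)
else:
    vectors = FAISS.from_documents(chunks, embeddings)  # "chunks" built as in the loader sketch above
    vectors.save_local("faiss_index")

In Streamlit, wrapping the load in st.cache_resource also avoids rebuilding the index on every interaction.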
This series is a gem 💎
Thanks!
Is Groq LPU hardware needed to see the actual inference performance boost?
Sir you are looking handsome
You are the best, dude!
Hi, I am interested in predictive analytics, where I mainly use ML models. How would LLMs be useful in this field?
Instead of chains, can we use an agent here?
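It can be done, though this is a sketch rather than what the video shows: wrap the retriever as a tool and hand it to a tool-calling agent, keeping the same Groq chat model. The names docs_search, vectors, and llm below are placeholders for the objects the app already builds.

from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain.tools.retriever import create_retriever_tool
from langchain_core.prompts import ChatPromptTemplate

# Expose the FAISS retriever as a tool the agent can decide to call.
retriever_tool = create_retriever_tool(
    vectors.as_retriever(),
    name="docs_search",
    description="Search the ingested documents and return relevant passages.",
)

prompt = ChatPromptTemplate.from_messages([
    ("system", "Use the docs_search tool when the question is about the documents."),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])

agent = create_tool_calling_agent(llm, [retriever_tool], prompt)  # llm = the ChatGroq model
executor = AgentExecutor(agent=agent, tools=[retriever_tool], verbose=True)
print(executor.invoke({"input": "What does the document say about attention?"})["output"])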
Hi dear friend,
thank you for your efforts.
How can I use this tutorial on PDFs in another language (for example, Persian)?
What would the approach be?
I made many attempts and tested different models, but the results when asking questions about the PDFs are not good or accurate!
Thank you for the explanation.
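One hedged suggestion: the embedding models used in these demos are mostly English-centric, so results on Persian PDFs often improve if you swap in a multilingual embedding model. A sketch assuming the sentence-transformers model below (any multilingual embedding model can take its place):

# pip install sentence-transformers
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS

# This model covers ~50 languages, including Persian.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")
vectors = FAISS.from_documents(chunks, embeddings)  # "chunks" are the split Persian PDF pages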
Sir, I'm getting this error:
ValueError: Error raised by inference API HTTP code: 404, {"error":"model 'llama2' not found, try pulling it first"}
Were you able to figure this out?
Solution?
Install llama2 by running "ollama run llama2".
I am getting the same error in every project from this playlist. Did you find the solution?
@aviratsharma771 Install Ollama on your PC, then run "ollama run llama2" in Command Prompt.
llm = ChatGroq(groq_api_key=groq_api_key,
               model_name="mixtral-8x7b-32768")
I initialized Mixtral like this, but I am getting the following error:
ValueError: Error raised by inference API HTTP code: 404, {"error":"model 'llama2' not found, try pulling it first"}
Install llama2 by running "ollama run llama2".
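That 404 is most likely not coming from Groq at all: in this app the chat model runs on Groq, but the embeddings still go through the local Ollama server, whose embedding wrapper defaults to llama2. A sketch of the likely fix, under that assumption:

# In a terminal first:  ollama pull llama2   (or whichever model you point the embeddings at)
from langchain_groq import ChatGroq
from langchain_community.embeddings import OllamaEmbeddings

llm = ChatGroq(groq_api_key=groq_api_key, model_name="mixtral-8x7b-32768")  # chat runs on Groq's cloud
embeddings = OllamaEmbeddings(model="llama2")  # embeddings run locally, so the model must be pulled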
We used session state, but the process looks similar to the last Streamlit app we created, nothing different. You said you would explain it.
Hi @krishnaik06, please make a video on how to connect it with a cloud vector database like Pinecone or Astra DB. After the version update it's not working with LangChain.
Nice 🙂
How do I develop a plagiarism checker?
Sir, I want to build a chatbot for a fine-tuned llama2 model on my local system.
Is it possible to build it with Streamlit like your projects?
Please give us a solution for this 😊😊
I swear, same issue with me too; I press 'c' before 'g' and it always turns out to be 'lancghain'.
Please build a conversational chain system using open-source LLMs.
Out of all the Ollama models:
Llama 3 8B 4.7GB ollama run llama3
Llama 3 70B 40GB ollama run llama3:70b
Phi-3 3.8B 2.3GB ollama run phi3
Mistral 7B 4.1GB ollama run mistral
Neural Chat 7B 4.1GB ollama run neural-chat
Starling 7B 4.1GB ollama run starling-lm
Code Llama 7B 3.8GB ollama run codellama
Llama 2 Uncensored 7B 3.8GB ollama run llama2-uncensored
LLaVA 7B 4.5GB ollama run llava
Gemma 2B 1.4GB ollama run gemma:2b
Gemma 7B 4.8GB ollama run gemma:7b
Solar 10.7B 6.1GB ollama run solar
On a 16 GB RAM 'okay' desktop, Phi-3 and Gemma-2B seem to run in reasonable time, but I could not get Groq to work with either. Please advise what changes are needed for this part of the series. As of now, the wait time is too long.
Thanks
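One point that may help, hedged since the exact code isn't shown here: Groq doesn't run your local Ollama models; it serves its own hosted models in its cloud, so local RAM doesn't affect it. For this part of the series the change is to swap the local LLM for ChatGroq with one of Groq's hosted model names, roughly:

# pip install langchain-groq   and put GROQ_API_KEY in your .env
import os
from dotenv import load_dotenv
from langchain_groq import ChatGroq

load_dotenv()
# Use whichever model names Groq currently hosts, e.g. "mixtral-8x7b-32768" or "llama3-8b-8192".
llm = ChatGroq(groq_api_key=os.environ["GROQ_API_KEY"], model_name="llama3-8b-8192")
print(llm.invoke("Hello from a 16 GB desktop").content)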
What is the purpose of using open-source LLMs with the paid Groq API?
Just use the OpenAI API key directly.
The only purpose of free LLMs is the ability to run them locally for free.
awesomeee ;)
Open source is 🫶🫶🫶