Thanks a lot, Madam!!! You are awesome at explaining things in a very calm and simple way, whereas some YouTubers exaggerate. :)
Thank you so much, madam. You really considered my comment regarding RAG implementation without any secret key. Thank you so much again! Keep posting and keep growing!! I will definitely share this video with my whole network. Happy coding!
You are most welcome 🙂
I hope there will be more basic concept courses and practical courses. I really like the way you explain things, and your teaching is very vivid.
Thanks for the feedback! I'm planning on making more content like this.
Very Helpful
keep it up mam !!
Thanks a lot
Very helpful for me
Glad it helped
very nice
Thanks! I'm glad you liked it.
Good work
Thank you so much 😀
I have only one like option, but again and again I try to like these videos. Really helpful.
Glad my videos helped you 🙂
You are the best
@@GradientPlayz Thank you
If I had the option to subscribe 1M times, I would do so. But for one ID, there is only one subscription. You are awesome!!!!
Very nice and thank you very much.
Please help with ways or examples of training models.
Thanks.
You're welcome
I wish you could let us know how to get a clean or refined final answer
Thanks Aarohi Mam for your valuable video. Can I change the prompt response format so that it fills the details into a fixed template, like filling tender fields from the requirements/specifications in a PDF file? Please guide me further through the details!
Mam, can you make a video on API calling, tool calling, or function calling, and how to do it using a local LLM, LangChain, and a RAG dataset?
Noted!
@CodeWithAarohi and please cover LangGraph and LangSmith too
@ Sure
Please post regular videos about Generative AI - a full course.
I will try my best.
I wanted to know which configuration you are using, because I am facing an out-of-memory issue for this task. I have a 1TB CPU and a 2GB GPU. If I use the CPU, it takes more than 1 hour to complete just 1 question; on the other hand, if I use the GPU, it causes a memory issue!!
@@SumaiyaJahan-y4c My GPU VRAM is 24 GB.
Thank you very much for your effort.
I want to ask if I can use a Colab or Kaggle notebook instead of running the code on my local machine?
Yes, you can
How can I convert my unstructured data into structured data?
Please post a video about how to fine-tune the "Claude 3.5 Sonnet" API - a full course video for developers.
Noted!
Hello mam, I am stuck at the output "Loading checkpoint shards" - it's not loading, it's just stuck like that. Does it take a long time to load, or is there some problem? What should I do?
please reply mam
How do we know which Hugging Face embedding model to download? Please, somebody help me.
It depends on the type of task you are performing and the domain of the data.
If you are performing general text processing or are unsure about the specific domain, use general-purpose models like all-MiniLM-L6-v2 or all-mpnet-base-v2. These are fast, lightweight, and work well for a broad range of NLP tasks.
If your data is specific to a certain domain (e.g., legal, scientific, financial, etc.), select a model fine-tuned for that domain.
Legal texts: Use models like nlpaueb/legal-bert-base-uncased.
Scientific texts: Use models like allenai/scibert_scivocab_uncased.
Financial texts: Use sentence-transformers/finbert.
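As a rough illustration, here is a minimal sketch of loading one of the general-purpose models mentioned above with the sentence-transformers library; the sample sentences are just placeholders:
from sentence_transformers import SentenceTransformer
# Load a general-purpose embedding model (all-MiniLM-L6-v2 is small and fast).
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
# Encode a couple of example sentences into dense vectors.
sentences = ["What is retrieval-augmented generation?", "RAG combines search with an LLM."]
embeddings = model.encode(sentences)
# Each embedding is a 384-dimensional vector for this model.
print(embeddings.shape)
Swapping in one of the domain-specific model names listed above follows the same pattern.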
@@CodeWithAarohi Thank you so much for your time. Please start taking classes where we can ask you our doubts.
Please, it said that I have to download the model, and it took 9.5 GB. Is that true, or is there another method without downloading it?
You need to download the pretrained model. You can try using some other LLM which is smaller compared to this model.
@@CodeWithAarohi OK thank u so so so much 💗💗
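To illustrate the suggestion above, here is a minimal sketch of loading a lighter model through the transformers pipeline; the model name is only an example of a smaller alternative, not the one used in the video:
from transformers import pipeline
# Example of a lighter model (~1.1B parameters) -- swap in any small chat/causal LM
# from the Hugging Face Hub that fits your hardware and disk space.
generator = pipeline("text-generation", model="TinyLlama/TinyLlama-1.1B-Chat-v1.0")
result = generator("Explain RAG in one sentence.", max_new_tokens=60)
print(result[0]["generated_text"])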
Mam, please create an intelligent chatbot using Streamlit and LangChain (RAG), where the
chatbot can receive voice input in Urdu/Hindi, process it, and return both text and audio responses in
Urdu/Hindi. The chatbot should be able to interact with users fluently, allowing for seamless
audio-to-text and text-to-audio communication in Urdu/Hindi.
Workflow:
● Build the Streamlit interface for real-time Urdu/Hindi audio input and output.
● Integrate LangChain (RAG) with an LLM API to generate dynamic responses
based on the user's input (use PDF files only).
● Ensure the chatbot responds not only with a text-based answer in Urdu/Hindi but also converts that
response back to audio and plays it in the user's language.
Noted!
I am not able to load the HuggingFaceEmbeddings. It shows this error:
The specified module could not be found. Error loading "C:\Users\aj441\anaconda3\envs\llmenv\lib\site-packages\torch\lib\fbgemm.dll" or one of its dependencies.
@@amalkuttu8274 Are you running it through Anaconda or the cmd prompt?
@@CodeWithAarohi anaconda
I also faced the same issue. Instead of using the requirements.txt file, just install the packages directly using these commands:
1. conda create -n env_langchain2 python=3.10
2. conda activate env_langchain2
3. conda install pytorch torchvision torchaudio cpuonly -c pytorch
4. pip install transformers
5. pip install sentence-transformers
6. pip install langchain langchain_community langchain-huggingface langchain_experimental langchain_chroma langchainhub
7. pip install streamlit
8. conda install jupyter
9. jupyter notebook
Then test your installation by running this script in Jupyter Notebook:
import torch
import transformers
import sentence_transformers
import langchain
print("PyTorch version:", torch.__version__)
print("Transformers version:", transformers.__version__)
print("Sentence Transformers version:", sentence_transformers.__version__)
print("LangChain version:", langchain.__version__)
It worked for me! Let me know if you still face issues.
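As a quick follow-up, here is a minimal sketch (assuming the environment above) to confirm that the embeddings load without the fbgemm.dll error; the model name is just an example:
from langchain_huggingface import HuggingFaceEmbeddings
# Load a small embedding model through LangChain's Hugging Face wrapper.
embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
# If this prints a vector length (384 for this model), the import issue is resolved.
vector = embeddings.embed_query("Hello from the new environment!")
print(len(vector))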
@@eashan2405 I will surely look into that.
@@eashan2405 Really good one