Build Finance RAG Locally with DeepSeek R1-1.5B without GPU
- Published Feb 6, 2025
- Learn how to build a simple AI-powered question-answering system (RAG) using Python for Financial Datasets!
In this step-by-step tutorial, we’ll show you how to convert PDFs to text, split markdown content, create embeddings with Ollama and DeepSeek R1, and set up a RAG (Retrieval-Augmented Generation) chain for smart document processing.
Perfect for beginners in AI, machine learning, and natural language processing (NLP). Discover how to use LangChain, FAISS, and Ollama DeepSeek R1 to create your own AI assistant for answering questions from documents.
Watch now to learn Python AI programming, document embedding, and RAG implementation for smarter data handling!
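The steps described above (PDF → markdown → chunks → embeddings → FAISS → RAG chain) can be sketched roughly as follows. This is a minimal sketch, not the video's exact code: it assumes Ollama is running locally with `deepseek-r1:1.5b` pulled, the embedding model `nomic-embed-text` and the file name `report.md` are placeholder assumptions, and the PDF is assumed already converted to markdown (e.g. with Docling, as the tutorial does).

```python
# Rough sketch of the RAG pipeline outlined in the description.

def format_docs(docs):
    """Join retrieved documents into a single context string."""
    return "\n\n".join(doc.page_content for doc in docs)

def build_rag_chain(markdown_text: str):
    # Imports kept local so the helper above stays dependency-free.
    from langchain_text_splitters import MarkdownHeaderTextSplitter
    from langchain_community.vectorstores import FAISS
    from langchain_ollama import OllamaEmbeddings, ChatOllama
    from langchain_core.prompts import ChatPromptTemplate
    from langchain_core.output_parsers import StrOutputParser
    from langchain_core.runnables import RunnablePassthrough

    # 1. Split the markdown on headers so chunks follow document structure.
    splitter = MarkdownHeaderTextSplitter(
        headers_to_split_on=[("#", "h1"), ("##", "h2"), ("###", "h3")]
    )
    chunks = splitter.split_text(markdown_text)

    # 2. Embed the chunks and index them in FAISS.
    embeddings = OllamaEmbeddings(model="nomic-embed-text")  # assumed model
    vectorstore = FAISS.from_documents(chunks, embeddings)
    retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

    # 3. Wire retriever + prompt + local DeepSeek R1 into a RAG chain.
    prompt = ChatPromptTemplate.from_template(
        "Answer using only this context:\n{context}\n\nQuestion: {question}"
    )
    llm = ChatOllama(model="deepseek-r1:1.5b")
    return (
        {"context": retriever | format_docs, "question": RunnablePassthrough()}
        | prompt
        | llm
        | StrOutputParser()
    )

if __name__ == "__main__":
    chain = build_rag_chain(open("report.md").read())  # placeholder file
    print(chain.invoke("What was the total revenue?"))
```

Everything runs on CPU: Ollama serves the 1.5B model locally, so no GPU is required.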
#AI #Python #MachineLearning #NLP #LangChain #Ollama #DeepSeekR1 #RAG #DocumentProcessing #AITutorial #PythonProgramming #DeepSeek #OllamaDeepSeek #DeepSeekLocal #PythonOllama #AIChatbot #DeepSeekR1Locally #HowToUseOllama #RunDeepSeekLocally
Links
Dataset
github.com/lax...
Code Files
github.com/lax...
Local LLM Playlist
• Build Your Own Chatbot...
Ollama Setup for DeepSeek R1
• Run DeepSeek R1 Models...
👉🏻 Full LangChain Course
Master Langchain and Ollama - Chatbot, RAG and Agents
www.udemy.com/...
Master LangGraph and LangChain with Ollama- Agentic RAG
www.udemy.com/...
🔊 Watch till the end for a detailed description
💯 Read Full Blog with Code
kgptalkie.com
💬 Leave your comments and doubts in the comment section
📌 Save this channel and video to watch later
👍 Like this video to show your support and love ❤️
~~~~~~~~
🆓 Watch My Top Free Data Science Videos
👉🏻 Python for Data Scientist
bit.ly/3dETtFb
👉🏻 Machine Learning for Beginners
bit.ly/2WOVh7N
👉🏻 Feature Selection in Machine Learning
bit.ly/2YW6ZQH
👉🏻 Text Preprocessing and Mining for NLP
bit.ly/31sYMUN
👉🏻 Natural Language Processing (NLP) Tutorials
bit.ly/3dF1cTL
👉🏻 Deep Learning with TensorFlow 2.0 and Keras
bit.ly/3dFl09G
👉🏻 COVID 19 Data Analysis and Visualization Masterclass
bit.ly/31vNC1U
👉🏻 Machine Learning Model Deployment Using Flask at AWS
bit.ly/3b1svaD
👉🏻 Make Your Own Automated Email Marketing Software in Python
bit.ly/2QqLaDy
***********
🤝 BE MY FRIEND
🌍 Check Out ML Blogs: kgptalkie.com
🐦Add me on Twitter: / laxmimerit
📄 Follow me on GitHub: github.com/lax...
📕 Add me on Facebook: / kgptalkie
💼 Add me on LinkedIn: / laxmimerit
👉🏻 Complete Udemy Courses: bit.ly/32taBK2
⚡ Check out my Recent Videos: bit.ly/3ldnbWm
🔔 Subscribe for Free Videos: bit.ly/34wN6T6
🤑 Get in touch for Promotion: info@kgptalkie.com
✍️🏆🏅🎁🎊🎉✌️👌⭐⭐⭐⭐⭐
ENROLL in My Highest Rated Udemy Courses
to 🔑 Crack Data Science Interviews and Jobs
🏅🎁 Python for Machine Learning: A Step-by-Step Guide | Udemy
Course Link: bit.ly/ml-ds-p...
🎁🎊 Deep Learning for Beginners with Python
Course Link: bit.ly/dl-with...
📚 📗 Natural Language Processing ML Model Deployment at AWS
Course Link: bit.ly/bert_nlp
📊 📈 Data Visualization in Python Masterclass: Beginners to Pro
Course Link: bit.ly/udemy95...
📘 📙 Natural Language Processing (NLP) in Python for Beginners
Course Link: bit.ly/intro_nlp
🎉✌️ Advanced Natural Language and Image Processing Projects | Udemy
Course Link: bit.ly/kgptalk...
📈 📘 Python for Linear Regression in Machine Learning
Course Link: bit.ly/regress...
📙📊 R 4.0 Programming for Data Science || Beginners to Pro
Course Link: bit.ly/r4-ml
✍️🏆 Introduction to Spacy 3 for Natural Language Processing
Course Link: bit.ly/spacy-i...
Thank you. Somewhat 'dated' but still useful!
lol it's funny how we are calling a tutorial from last week "dated"
You are my goat
Have you worked on edge LLMs? I would like to ask you something.
What a great video! Your explanation is easy to understand. I have a question about RAG applications: do you think using different LLMs has an impact on output quality?
Yes, output quality is affected. If your RAG receives very complex input, you need a higher-quality LLM.
Thank you for the amazing content. Just a question: can we give the chatbot memory, so that we can ask a follow-up question based on its previous answer?
Hi,
Yes. The previous video in this Local LLM series covers exactly this.
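For readers asking the same thing: one lightweight way to give the chatbot memory is to keep a running history and fold recent turns into each prompt. A rough sketch (the 3-turn window and the `User:`/`Assistant:` labels are arbitrary choices, not from the video):

```python
# Minimal chat-memory sketch: keep past (question, answer) turns and
# prepend the most recent ones to each new question, so follow-ups can
# refer back to earlier answers.

def build_prompt(history, question, max_turns=3):
    """Format the last few (question, answer) turns plus the new question."""
    lines = []
    for q, a in history[-max_turns:]:
        lines.append(f"User: {q}")
        lines.append(f"Assistant: {a}")
    lines.append(f"User: {question}")
    return "\n".join(lines)

def chat(ask_llm, history, question):
    """ask_llm is any callable mapping a prompt string to an answer string."""
    answer = ask_llm(build_prompt(history, question))
    history.append((question, answer))
    return answer
```

With a RAG chain like the one in the tutorial, `ask_llm` would simply be the chain's `invoke` method.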
The retriever is not providing the proper context to the LLM, so the local chatbot gives wrong answers to my questions. Can you suggest different methods so the LLM gets the proper context from the retriever? I have tried changing the search_type but still cannot retrieve the proper docs for the question.
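Two things often help with poor retrieval: smaller chunks with overlap (so a key sentence is less likely to be split across chunk boundaries), and MMR retrieval with a wider candidate pool. A rough sketch; the chunk sizes and `k`/`fetch_k` values are starting points to tune, not recommendations from the video:

```python
# Sketch of two common fixes when the retriever returns poor context.

# Fix 1: re-chunk with smaller pieces and overlap, so sentences near a
# chunk boundary appear whole in at least one chunk.
def chunk_with_overlap(text, size=500, overlap=100):
    """Simple character-based chunker with overlapping windows."""
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        piece = text[start:start + size]
        if piece:
            chunks.append(piece)
    return chunks

# Fix 2: switch the retriever to MMR with a larger candidate pool
# (assuming a LangChain FAISS vectorstore, as in the tutorial):
#
#   retriever = vectorstore.as_retriever(
#       search_type="mmr",
#       search_kwargs={"k": 6, "fetch_k": 30},
#   )
#
# MMR re-ranks candidates for diversity, which helps when the top hits
# are near-duplicates that all miss the answer.
```

It is also worth printing `retriever.invoke(question)` directly to see which docs come back before blaming the LLM.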
For me, on an MBP M1 with 16 GB RAM, the Jupyter kernel crashes when loading a PDF using Docling's DocumentConverter.
Can you tell why it fails with this error:
The Kernel crashed while executing code in the current cell or a previous cell.
Logs: Disposing session as kernel process died ExitCode: undefined, Reason:
No reason is given
Please help
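Kernel deaths with no reason given on 16 GB machines are often out-of-memory kills, since Docling loads layout-analysis models when converting a PDF. One workaround worth trying is a lighter text-only extractor such as pypdf; a sketch, with a placeholder file name (note this loses Docling's markdown structure such as tables and headings, so it is a fallback, not a drop-in replacement):

```python
# Lighter fallback when Docling's DocumentConverter exhausts memory:
# pypdf extracts plain text without loading layout-analysis models.

def join_pages(pages):
    """Join per-page texts with a separator a text splitter can key on."""
    return "\n\n---\n\n".join(pages)

def extract_pdf_text(path):
    from pypdf import PdfReader  # pip install pypdf
    reader = PdfReader(path)
    # extract_text() can return None for image-only pages.
    return join_pages(page.extract_text() or "" for page in reader.pages)

if __name__ == "__main__":
    text = extract_pdf_text("finance_report.pdf")  # placeholder path
    print(text[:500])
```

Running the conversion in a plain terminal script instead of Jupyter can also free enough memory for Docling itself to finish.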