How to Run Gemma with LangChain and Ollama Locally
- Published Oct 10, 2024
- In this video, I'll show you how to use Gemma with LangChain and Ollama. First, we'll take a look at Ollama. Next, we'll learn how to use an Ollama model with LangChain. Finally, we'll cover how to use an Ollama chat model.
00:01 Intro
00:50 Installing Ollama
02:34 LangChain & Ollama
04:31 Working with LLMs
06:00 Working with Chat Models
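The install and model-pull steps from the chapters above can be sketched as follows (Linux/macOS shell; the install-script URL is Ollama's official one, and the `gemma:2b` model tag is the one used in the video):

```shell
# Install Ollama via the official install script (Linux/macOS)
curl -fsSL https://ollama.com/install.sh | sh
# Start the Ollama server in the background
ollama serve &
# Download the 2B Gemma weights
ollama pull gemma:2b
# Run a one-off prompt from the CLI
ollama run gemma:2b "Why is the sky blue?"
```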
🔗 Notebook: github.com/Tir...
🚀 Medium: / tirendazacademy
🚀 X: x.com/tirendaz...
🚀 Linkedin: / tirendaz-academy
▶️ LangChain Tutorials:
• LangChain Tutorials
▶️ Generative AI Tutorials:
• Generative AI Tutorials
▶️ LLMs Tutorials:
• LLMs Tutorials
▶️ HuggingFace Tutorials:
• HuggingFace Tutorials ...
🔥 Thanks for watching. Don't forget to subscribe, like the video, and leave a comment.
#ai #gemma #generativeai
Thanks so much for this informative tutorial. What keyboard are you using? It sounds very nice 😊
Very informative, friend, thank you
You're welcome
Hi, I get an error when running ollama run gemma:2b --> Error: error loading model. TIA!
Hi, before loading the model, you need to start Ollama on your computer.
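A minimal sketch of the fix suggested in the reply above, assuming Ollama is already installed on a Linux/macOS machine:

```shell
# Start the Ollama server first (or launch the desktop app);
# running `ollama run` without a server is a common cause of load errors.
ollama serve &        # start the server in the background
sleep 2               # give it a moment to come up
ollama run gemma:2b   # now the model can be loaded
```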