Getting Started with Ollama, Llama 3.1 and Spring AI
- Published Sep 24, 2024
- In this tutorial you will learn how to get started with Ollama, which lets you run open-source Large Language Models (LLMs) locally. We will be using Meta's latest and greatest model, Llama 3.1. Finally, we will bring this back to Spring by showing how you can use that model in your next Spring application with Spring AI.
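As a rough sketch of what the video covers: once Ollama is running locally, a Spring Boot application using the Spring AI Ollama starter can point at it through configuration. The property names below follow Spring AI's Ollama documentation, but Spring AI has moved quickly, so verify them against the version you are using:

```properties
# Ollama's default local endpoint
spring.ai.ollama.base-url=http://localhost:11434
# The model pulled beforehand with `ollama pull llama3.1`
spring.ai.ollama.chat.options.model=llama3.1
```

With that in place, Spring AI auto-configures a chat model bean you can inject and call from your controllers or services.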
🔗Resources & Links mentioned in this video:
GitHub Repo: github.com/dan...
Spring Office Hours: www.springoffi...
👋🏻Connect with me:
Website: www.danvega.dev
Twitter: / therealdanvega
Github: github.com/dan...
LinkedIn: / danvega
Newsletter: www.danvega.de...
SUBSCRIBE TO MY CHANNEL: bit.ly/2re4GH0 ❤️
Thanks Dan! I have been using Ollama with Open WebUI for a while now; I can't wait to experiment with it on my projects!
Thank you for this! I've been wanting to learn Spring AI ever since it came out, but I never did because of OpenAI's pricing. This finally gets me working with it!
Hello, nice to meet you. I am from Brazil.
very good knowledge!!
The best Spring teacher; this new revolution is doing me good.
Beautiful! Thank you.
Great video!
Great video! Thanks! I think an example with RAG and/or vision models, like those shown in the ChatGPT videos, would be great.
Hi, @DanVega.
Thanks for video.
I'm wondering if there is any implementation of ImageModel for Ollama planned soon (e.g., like OpenAiImageModel)?
Hi Dan, how can we enable chat memory to make it function like a continuous chat conversation?
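Spring AI's advisor API for memory has changed across milestone releases, so rather than pin a specific class name, here is a framework-free sketch of what chat memory does conceptually: keep the prior turns and replay them with every new request so the model sees the whole conversation. The class and method names below are illustrative only, not Spring AI API:

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch of chat memory: an LLM is stateless, so "memory" just
// means resending the accumulated conversation with each new user message.
class ChatMemorySketch {
    private final List<String> history = new ArrayList<>();

    // Append the new user turn and return the full prompt to send to the model.
    public String buildPrompt(String userMessage) {
        history.add("user: " + userMessage);
        return String.join("\n", history);
    }

    // Record the model's reply so the next prompt includes it.
    public void recordReply(String reply) {
        history.add("assistant: " + reply);
    }
}
```

In Spring AI the same idea is packaged as a chat-memory advisor you attach to the ChatClient; check the memory section of the Spring AI reference docs for the exact class names in your version.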
It really sounds like there is potential to tie it into a localized data repository and have it go retrieve information for you.
That could be OneNote, a wiki, or Bugzilla: quickly finding and pulling relevant information for you. Any sort of documentation. Like a localized search engine for your proprietary documents and methods. Pretty interesting.
Good video
For me, function calling is not working, even though I followed the same code.
Hi Dan. I'm trying to implement EmbeddingModel with PgVectorStore using Llama 3.1, but I'm getting an error saying Ollama has a fixed embedding size of 4096, while pgvector only supports dimensions below 2000. Do you have any suggestions for working around this problem?
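One common workaround (my own suggestion, not something from the video) is to use a dedicated, smaller embedding model for the vector store instead of llama3.1's 4096-dimension embeddings, since pgvector's index types cap out around 2000 dimensions. For example, nomic-embed-text produces 768-dimension vectors. The property names below follow Spring AI's Ollama and PGVector documentation and should be checked against your version:

```properties
# Smaller embedding model, pulled first with `ollama pull nomic-embed-text`
spring.ai.ollama.embedding.options.model=nomic-embed-text
# Tell the PGVector store the matching dimension count
spring.ai.vectorstore.pgvector.dimensions=768
```

The chat model and the embedding model do not have to be the same; llama3.1 can still handle chat while the embedding model handles the vector store.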
I'm getting {"error":"model \"llama3.1\" not found, try pulling it first"} as the root cause. Why?
Type this command in your terminal to pull the model, then retry:

```shell
ollama pull llama3.1
```