Using Local Large Language Models in Semantic Kernel
- Published Aug 7, 2024
- Did you know that you can download large language models to your local machine and build Semantic Kernel agents with them, instead of having to use Azure OpenAI or the OpenAI API? In this video, I show you how to download LLMs and SLMs to your local machine via Ollama and LM Studio, and how to use those models in your Semantic Kernel applications!
0:00 Introduction
0:38 Option 1 - Ollama
2:16 Interacting with Ollama models via the terminal
3:48 Using Ollama models in Semantic Kernel applications
7:40 Interacting with Ollama models via Semantic Kernel
8:33 Option 2 - LM Studio
9:14 Interacting with LM Studio models in the app
9:50 Running our LM Studio models via localhost
11:39 Using LM Studio models in Semantic Kernel
13:15 Wrap up
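The Ollama workflow covered in the chapters above boils down to talking to a local HTTP endpoint. As a rough sketch (not the video's exact code), here is how a script can call Ollama's `/api/chat` REST endpoint with only the Python standard library; the model name `phi3` and the default port `11434` are assumptions, so substitute whatever model you pulled with `ollama pull`.

```python
# Sketch: chatting with a locally running Ollama model via its REST API.
# Assumes Ollama is serving on its default port (11434) and that a model
# (here "phi3", an assumption) has already been pulled.
import json
import urllib.request

OLLAMA_BASE = "http://localhost:11434"  # Ollama's default local address


def build_chat_request(model: str, user_message: str) -> dict:
    """Build the JSON body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "stream": False,  # one complete response instead of streamed chunks
    }


def chat(model: str, user_message: str) -> str:
    """POST a chat request to the local Ollama server and return the reply text."""
    body = json.dumps(build_chat_request(model, user_message)).encode()
    req = urllib.request.Request(
        f"{OLLAMA_BASE}/api/chat",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]


# Example (requires `ollama serve` and a pulled model):
# print(chat("phi3", "Explain Semantic Kernel in one sentence."))
```

Semantic Kernel's OpenAI-style connectors can be pointed at the same server through Ollama's OpenAI-compatible endpoint (`http://localhost:11434/v1`), which is what makes the "use Ollama models in Semantic Kernel" step in the video possible without any cloud keys.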
Useful links
LM Studio: lmstudio.ai/
Ollama: ollama.com/
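LM Studio's "run via localhost" step works the same way: its local server speaks the OpenAI chat-completions wire format, so any OpenAI-style client (including Semantic Kernel's OpenAI connector with a custom base URL) can talk to it. A minimal sketch, assuming LM Studio's default port `1234` and a placeholder model identifier:

```python
# Sketch: calling LM Studio's OpenAI-compatible local server.
# The port (1234, LM Studio's default) and the model id ("local-model")
# are assumptions; LM Studio shows the real identifier for the loaded model.
import json
import urllib.request

LM_STUDIO_BASE = "http://localhost:1234/v1"  # LM Studio's default server URL


def build_completion_request(model: str, user_message: str) -> dict:
    """Build an OpenAI-style /chat/completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }


def chat(model: str, user_message: str) -> str:
    """POST to the local LM Studio server and return the assistant's reply."""
    body = json.dumps(build_completion_request(model, user_message)).encode()
    req = urllib.request.Request(
        f"{LM_STUDIO_BASE}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]


# Example (requires LM Studio's local server to be running):
# print(chat("local-model", "What is an SLM?"))
```

Because the wire format matches OpenAI's, swapping between LM Studio, Ollama's `/v1` endpoint, and the hosted OpenAI API is mostly a matter of changing the base URL and model id.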
Connect with me!
Twitter: @willvelida
GitHub: github.com/willvelida
Bluesky: bsky.app/profile/willvelida.b...
Would love to see how this works as well with the Ollama Embeddings API + Semantic Kernel Memory. ❤
Do you think Ollama is better than LM Studio performance-wise? LM Studio has a nice UI, but it looks like it's further away from the metal.
what hardware did you use?
Curious to hear if anyone has been able to get local models working with automatic function calling
Same, would like to see function calling with local LLMs
Ayyy it's Cecil from Microsoft. Didn't expect you here. What a small world.
@vivekkaushik9508 Big fan of the channel. Also left Microsoft a while ago 🙂
@CecilPhillip 😲 Sorry, I didn't know. I hope everything is well.
@vivekkaushik9508 Nothing to be sorry about. It's all good. Still a supporter of a lot of the work going on there