This is gold. Using Semantic Kernel with OllamaSharp. Thank you so much, and of course I am very much interested in more of this stuff.
Fantastic, you don't see much about LLMs and C#, keep going.
Thank you very much! This example was very helpful for me to create my own Ollama connector. Just FYI: the OllamaSharp lib is now at version 3.0.1 and has some breaking changes, so the code in the blog post doesn't quite work. Besides that, an excellent example, on point!
@arkord76 Thanks for letting me know. I have updated the blog post.
Great tutorial. Can you show us how to do RAG using the packages you used?
We are using a single-layer application. When the tester load-tested the API in JMeter with 1000 users at a time, my database crashed, and now I can't get the permissions from the database. Sometimes it fetches the permissions, sometimes not. What should I do about this? Please help.
Can we make a shared DbMigrator for a single-layer application?
Hi @Anto Subash, thanks for sharing. I am running the model on my Windows PC (CPU only) with 32 GB of RAM. I am getting the error below, and it seems very slow :(. Unhandled exception. System.Threading.Tasks.TaskCanceledException: The request was canceled due to the configured HttpClient.Timeout of 100 seconds elapsing.
What model are you running? Make sure Ollama is running OK with that model.
@antosubash Same model as you mentioned, and Ollama is running.
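The 100-second default HttpClient timeout mentioned above will abort long generations on a CPU-only machine. Here is a minimal sketch of one way around it, assuming your OllamaSharp version exposes a constructor that accepts a preconfigured HttpClient (check the overloads in the version you have installed):

```csharp
// Hypothetical sketch: raise the HTTP timeout so slow CPU-only generation
// can finish instead of hitting the 100-second default.
using System;
using System.Net.Http;
using OllamaSharp;

var httpClient = new HttpClient
{
    BaseAddress = new Uri("http://localhost:11434"),
    Timeout = TimeSpan.FromMinutes(10) // generous limit for CPU inference
};

// Assumes an OllamaApiClient constructor that takes an HttpClient;
// if yours only takes a Uri/string, look for an overload or factory
// that lets you supply the HttpClient or its timeout.
var ollama = new OllamaApiClient(httpClient)
{
    SelectedModel = "llama3.1"
};
```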
Can Llama read PDF files?
Yes, it can. You have to create embeddings.
@antosubash How? Can you do a video?
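To make the "create embeddings" step above concrete, here is a rough sketch of the idea rather than the exact code from the post: extract the PDF text yourself with any PDF library, then ask Ollama for a vector per chunk through its REST embeddings endpoint. The model name and the response record below are assumptions; chunking and vector storage are left out.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Json;

var http = new HttpClient { BaseAddress = new Uri("http://localhost:11434") };

// One chunk of text already extracted from the PDF (chunking strategy is up to you).
var chunk = "Text extracted from one page or paragraph of the PDF...";

// Ollama's /api/embeddings endpoint takes a model name and a prompt
// and returns a single embedding vector for it.
var response = await http.PostAsJsonAsync("/api/embeddings", new
{
    model = "nomic-embed-text", // any embedding-capable model pulled into Ollama
    prompt = chunk
});
response.EnsureSuccessStatusCode();

var result = await response.Content.ReadFromJsonAsync<EmbeddingResponse>();
Console.WriteLine($"Vector length: {result?.embedding?.Length}");

// Store the vector next to its chunk; at question time, embed the question,
// find the closest chunks, and pass them to the chat model (basic RAG).
record EmbeddingResponse(float[] embedding);
```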
Also, is this model capable of doing RAG, embeddings, etc.? And what about function calling?
Function calling will work with the connectors. Wait for the Ollama connector to be released.
I got function calling to work quite consistently with llama3.1 and the IChatCompletionService. It required lowering the temperature to 0.1 and providing a system prompt telling the assistant that it actually uses plugins. Edit: I added the model via AddOpenAIChatCompletion().
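For anyone wanting to try the same approach, here is a minimal sketch of that setup under a few assumptions: Ollama's OpenAI-compatible endpoint at http://localhost:11434/v1, a Semantic Kernel version whose AddOpenAIChatCompletion overload accepts a custom endpoint, and a made-up TimePlugin standing in for whatever plugin you actually register.

```csharp
using System;
using System.ComponentModel;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.OpenAI;

var builder = Kernel.CreateBuilder();

// Register llama3.1 through the OpenAI connector, pointed at Ollama's
// OpenAI-compatible endpoint (the exact overload varies by SK version).
builder.AddOpenAIChatCompletion(
    modelId: "llama3.1",
    endpoint: new Uri("http://localhost:11434/v1"),
    apiKey: "ollama"); // Ollama ignores the key, but the connector expects one

builder.Plugins.AddFromType<TimePlugin>(); // hypothetical plugin, defined below

var kernel = builder.Build();
var chat = kernel.GetRequiredService<IChatCompletionService>();

// System prompt telling the model it actually uses plugins, as the comment suggests.
var history = new ChatHistory();
history.AddSystemMessage("You are a helpful assistant. Use the available plugins to answer questions.");
history.AddUserMessage("What time is it right now?");

// Low temperature plus automatic tool invocation.
var settings = new OpenAIPromptExecutionSettings
{
    Temperature = 0.1,
    ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
};

var reply = await chat.GetChatMessageContentAsync(history, settings, kernel);
Console.WriteLine(reply.Content);

// Minimal example plugin with a single kernel function.
public class TimePlugin
{
    [KernelFunction, Description("Gets the current local time.")]
    public string GetCurrentTime() => DateTime.Now.ToString("T");
}
```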
Fix your thumb :)
This is not a tutorial; you're just copying and pasting code.