Ollama and Semantic Kernel with C#

  • Published 28 Jan 2025

COMMENTS • 19

  • @mrsajjad30
    @mrsajjad30 10 days ago

    This is gold. Using Semantic Kernel with OllamaSharp. Thank you so much, and of course I am very much interested in more of this stuff.

  • @lovol2
    @lovol2 1 month ago +1

    Fantastic, we don't see much about LLMs and C#. Keep going.

  • @arkord76
    @arkord76 4 months ago +1

    Thank you very much! This example was very helpful for me in creating my own Ollama connector. Just FYI: the OllamaSharp lib is now at version 3.0.1 and has some breaking changes, so the code in the blog post doesn't quite work. Besides that, an excellent example, on point!

    • @antosubash
      @antosubash  4 months ago

      @arkord76 Thanks for letting me know. I have updated the blog post.
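
    A minimal sketch of a chat loop against the newer OllamaSharp 3.x API, assuming the OllamaApiClient and Chat types and a local Ollama server on the default port; exact member names may vary between versions:

        using OllamaSharp;

        // Point the client at the local Ollama server and pick a model.
        var ollama = new OllamaApiClient(new Uri("http://localhost:11434"))
        {
            SelectedModel = "llama3.1"
        };

        // Chat keeps the conversation history between calls.
        var chat = new Chat(ollama);

        Console.Write("> ");
        while (Console.ReadLine() is { Length: > 0 } message)
        {
            // Responses are streamed token by token.
            await foreach (var token in chat.SendAsync(message))
                Console.Write(token);

            Console.Write("\n> ");
        }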

  • @rodneyperello
    @rodneyperello 1 month ago

    Great tutorial. Can you show us how to do RAG using the packages you used?

  • @jagadeeshs2026
    @jagadeeshs2026 4 months ago

    We are using a single-layer application. When the tester load-tested the API in JMeter with 1000 users at a time, my database crashed, and now I can't get the permissions from the database; sometimes it fetches the permissions and sometimes it doesn't. What should I do about this? Please help me.

  • @jagadeeshs2026
    @jagadeeshs2026 4 months ago

    Can we make a shared DbMigrator for a single-layer application?

  • @Ajmal_Yazdani
    @Ajmal_Yazdani 5 months ago

    Hi @Anto Subash, thanks for sharing. I am running the model on my Windows PC (CPU) with 32 GB. I am getting the error below, and it seems very slow :(. Unhandled exception. System.Threading.Tasks.TaskCanceledException: The request was canceled due to the configured HttpClient.Timeout of 100 seconds elapsing.

    • @antosubash
      @antosubash  5 months ago +1

      What model are you running? Make sure Ollama is running OK with that model.

    • @Ajmal_Yazdani
      @Ajmal_Yazdani 5 months ago

      @antosubash The same model as you mentioned, and Ollama is running.
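
    That timeout is HttpClient's 100-second default, which slow CPU-only inference can easily exceed. One way around it, assuming the OllamaApiClient overload that accepts an HttpClient, is to pass in a client with a longer timeout:

        using OllamaSharp;

        // Raise the timeout for slow CPU-only inference; OllamaApiClient
        // reads the endpoint from the client's BaseAddress.
        var http = new HttpClient
        {
            BaseAddress = new Uri("http://localhost:11434"),
            Timeout = TimeSpan.FromMinutes(15)
        };

        var ollama = new OllamaApiClient(http)
        {
            SelectedModel = "llama3.1"
        };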

  • @afsalc9674
    @afsalc9674 3 months ago

    Can Llama read PDF files?

    • @antosubash
      @antosubash  3 months ago

      Yes, it can. You have to create embeddings.

    • @afsalc9674
      @afsalc9674 3 months ago

      @antosubash How? Could you do a video on it?
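
    A rough sketch of that embeddings step: extract the PDF text with a PDF library, split it into chunks, and embed each chunk with an embedding model. This assumes OllamaSharp's EmbedRequest/EmbedAsync API and the nomic-embed-text model; names and property types may differ between versions:

        using System.Linq;
        using OllamaSharp;
        using OllamaSharp.Models;

        var ollama = new OllamaApiClient(new Uri("http://localhost:11434"));

        // Text chunks extracted from the PDF beforehand (PDF parsing not shown).
        var chunks = new List<string> { "First page text...", "Second page text..." };

        var response = await ollama.EmbedAsync(new EmbedRequest
        {
            Model = "nomic-embed-text", // a dedicated embedding model
            Input = chunks
        });

        // One vector per chunk: store the vectors, embed the user's question the
        // same way, and send the most similar chunks to the LLM as context.
        Console.WriteLine($"Embedded {response.Embeddings.Count()} chunks.");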

  • @Ajmal_Yazdani
    @Ajmal_Yazdani 5 months ago

    Also, is this model capable of doing RAG, embeddings, etc.? Plus, what about function calling?

    • @antosubash
      @antosubash  5 months ago +2

      Function calling will work with the connectors. Wait for the Ollama connector to be released.

    • @nitrovent
      @nitrovent 5 months ago

      I got function calling to work quite consistently with llama3.1 and the IChatCompletionService. It required lowering the temperature to 0.1 and providing a system prompt telling the assistant that it actually uses plugins. Edit: I added the model via AddOpenAIChatCompletion().
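
    A sketch of one possible wiring for that, pointing Semantic Kernel's OpenAI connector at Ollama's OpenAI-compatible /v1 endpoint. The endpoint overload is experimental (SKEXP0010), the TimePlugin here is a made-up example, and exact parameter names may vary by Semantic Kernel version, so treat this as an illustration rather than the setup from the video:

        #pragma warning disable SKEXP0010 // custom OpenAI endpoint is experimental

        using System.ComponentModel;
        using Microsoft.SemanticKernel;
        using Microsoft.SemanticKernel.ChatCompletion;
        using Microsoft.SemanticKernel.Connectors.OpenAI;

        var builder = Kernel.CreateBuilder();

        // Ollama exposes an OpenAI-compatible API under /v1; the API key is ignored.
        builder.AddOpenAIChatCompletion(
            modelId: "llama3.1",
            endpoint: new Uri("http://localhost:11434/v1"),
            apiKey: "ollama");

        builder.Plugins.AddFromType<TimePlugin>();
        var kernel = builder.Build();

        // Low temperature plus an explicit system prompt makes tool calls more reliable.
        var settings = new OpenAIPromptExecutionSettings
        {
            Temperature = 0.1,
            ToolCallBehavior = ToolCallBehavior.AutoInvokeKernelFunctions
        };

        var history = new ChatHistory();
        history.AddSystemMessage("You are an assistant that uses the registered plugins to answer questions.");
        history.AddUserMessage("What time is it in UTC?");

        var chat = kernel.GetRequiredService<IChatCompletionService>();
        var reply = await chat.GetChatMessageContentAsync(history, settings, kernel);
        Console.WriteLine(reply.Content);

        // Hypothetical plugin used only for this example.
        public sealed class TimePlugin
        {
            [KernelFunction, Description("Returns the current UTC time.")]
            public string GetUtcTime() => DateTime.UtcNow.ToString("u");
        }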

  • @diegogarber1
    @diegogarber1 1 month ago

    Fix your thumbnail :)

  • @jibrinmasud
    @jibrinmasud 2 months ago

    This is not a tutorial; you're just copying and pasting code.