100% LOCAL AI Agents with CrewAI and Ollama

  • Published Nov 24, 2024

COMMENTS • 24

  • @jafarekrami
    @jafarekrami 28 days ago +6

    from crewai import Task, Crew, Process, Agent, LLM
    # Initialize the local Llama 3.1 8B model served by Ollama
    llm = LLM(model="ollama/llama3.1:8b")

    • @NelitoCalixto
      @NelitoCalixto 15 days ago

      this worked for me: llm = LLM(model="ollama/phi3.5:3.8b", base_url="http://localhost:11434")

  • @apcwss
    @apcwss 2 months ago +1

    Thank you so much. I was having an issue connecting CrewAI with Llama, but your template helped me.

  • @jarrod752
    @jarrod752 3 months ago +3

    _Paid Nothing..._
    Electricity: _Am I a joke to you?!_

  • @DihelsonMendonca
    @DihelsonMendonca 4 months ago +4

    Hello, please make a review of Open WebUI. It's the big hype currently. Open WebUI is a frontend for LLMs that lets users talk hands-free with excellent voices, using internal or external TTS APIs from Eleven Labs, Groq, etc. It has web search, RAG, and long-term memory; it's compatible with the OpenAI API; it can import GGUF models directly from Hugging Face and convert them for use with Ollama; it can fine-tune models for special needs, work with multimodal models (image and video), engage multiple models simultaneously (local and external at the same time), and accept new plugins, external tools, and functions... And it's also open source. 🎉❤

  • @DihelsonMendonca
    @DihelsonMendonca 4 months ago +1

    Now we can with GPT-4o mini. The prices are 90% cheaper! ❤

  • @NateGinn-u9m
    @NateGinn-u9m 4 months ago +2

    now do this with groq

  • @none-hr6zh
    @none-hr6zh 1 month ago

    Thank you so much. Why is there a need for an API if I am using local LLMs? Suppose I want to see the tokenization and detokenization, or any model-related information: how do I see that model file? I run ollama pull and then use the model in CrewAI, but the response still comes through an API. Please give a hint on how to see the local LLM files.

    • @TylerReedAI
      @TylerReedAI  1 month ago +1

      Hmm, your response is still coming from OpenAI? If you set your OpenAI API key to something like sk-1111, does it still work? You don't need a real API key, just a placeholder; any string, really.

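The placeholder trick in the reply above can be sketched in two lines. This is an assumption about typical setups, not code from the video: some stacks insist that OPENAI_API_KEY be set even when every call goes to a local Ollama model, and any dummy string satisfies the check.

```python
# Set a dummy OpenAI key so libraries that check for its presence
# don't complain; nothing is ever sent to OpenAI when the model
# string routes to a local Ollama server.
import os

os.environ["OPENAI_API_KEY"] = "sk-1111"  # placeholder, not a real key
print(os.environ["OPENAI_API_KEY"])
```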
    • @none-hr6zh
      @none-hr6zh 1 month ago

      @@TylerReedAI Thank you for the reply. My question is how to get at the details of local LLMs (how they tokenize, encode, and decode). If I have to change any method inside the model, I must have access to the model file. I am using Ollama(model="llama", base_url=...), which uses a REST API to send data to the LLM and receive the response. I am not able to find how the message is preprocessed and encoded before being given to the LLM.

    • @none-hr6zh
      @none-hr6zh 1 month ago

      I have one doubt:
      from langchain.llms import Ollama
      llm = Ollama(model='llama3')
      The response goes through a REST API using a POST method. Since it is localhost, can we access the server-side code? I want to see how my input reaches the Llama model and all the internal details of the model in .py itself.

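To address the question in this thread: the Python wrapper never tokenizes anything itself; it just POSTs JSON to the Ollama server, which does the prompt templating and tokenization internally (the weights live as GGUF blobs in Ollama's local storage). A sketch of the request body a client sends to Ollama's /api/generate endpoint, built but not sent here; the prompt text is an illustrative assumption:

```python
# Build (without sending) the JSON body that clients POST to
# http://localhost:11434/api/generate. Printing it shows exactly
# what text leaves Python; everything after that (templating,
# tokenization, decoding) happens inside the Ollama server process.
import json

payload = {
    "model": "llama3",
    "prompt": "Why is the sky blue?",
    "stream": False,
}
body = json.dumps(payload)
print(body)
```

Inspecting this payload (or the server logs from `ollama serve`) is the practical way to see what is sent to the model from the Python side.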
  • @JNET_Reloaded
    @JNET_Reloaded 2 months ago +1

    Nice

  • @m.c.4458
    @m.c.4458 1 month ago

    I only use Ollama now, and I don't use CrewAI but moa, and write the logic for it myself. I am done with frameworks... too many APIs and biased data.

  • @ZaneLing-t3m
    @ZaneLing-t3m 21 days ago

    If I want to use Ollama on my server, not locally, how can I change my code? Just changing the URL to my server's Ollama URL doesn't work.

    • @TylerReedAI
      @TylerReedAI  21 days ago

      So you have a separate server with Ollama? If you have a server running, maybe create an API that you can call. With Ollama running on your own computer, yes, you would just change the base URL. It seems you have a different setup, so how would you normally connect to your own server?

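For the remote-server case discussed above, the usual sketch is: make the remote Ollama listen on all interfaces (set OLLAMA_HOST=0.0.0.0 before `ollama serve`), open port 11434, then point the client's base URL at the server's address. The IP below is a hypothetical placeholder:

```python
# Point a client at Ollama running on a remote machine instead of
# localhost. The server must be started with OLLAMA_HOST=0.0.0.0
# and port 11434 must be reachable from this machine.
server_ip = "192.168.1.50"  # hypothetical address of your Ollama box
base_url = f"http://{server_ip}:11434"
print(base_url)
# then e.g.: llm = LLM(model="ollama/llama3.1:8b", base_url=base_url)
```

If just swapping the URL fails, the usual culprits are Ollama binding only to 127.0.0.1 on the server or a firewall blocking 11434, not the client code.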
  • @mobilesales4696
    @mobilesales4696 3 months ago

    Can you write a script in which we can add an unlimited number of AIs that use an API system, with separate functions in which we can store our offline LLMs (like all Ollama versions, or any AI system), and store those offline LLMs on GitHub so we can use them wherever we want by running a simple script? 😅😊

  • @ade7456
    @ade7456 3 months ago

    Great! Can I create multiple agents? E.g. one to code, one to test, one to correct code, etc.?

  • @themax2go
    @themax2go 3 months ago

    would you still recommend this approach - crewai - over autogenstudio (ua-cam.com/video/IjqAMWUI0r8/v-deo.html) ?

  • @darleisonrodrigues3365
    @darleisonrodrigues3365 4 months ago +1

    🇧🇷🇧🇷🇧🇷👏👏👏