💯 FREE Local LLM - AI Agents With CrewAI And Ollama Easy Tutorial 👆

  • Published 19 Sep 2024

COMMENTS • 20

  • @analyticsCamp
    @analyticsCamp  3 months ago +1

    Thanks for all your helpful comments :) Here's a related video explaining the AI agentic workflow: ua-cam.com/video/lA3Tju4VUho/v-deo.html

  • @peralser
    @peralser 2 months ago +1

    Excellent video!! Thanks!

  • @optiondrone5468
    @optiondrone5468 3 months ago +1

    Thanks for sharing your thoughts and a practical AI agent workflow. I also believe that this agentic workflow will fuel a lot of LLM-based development in 2024.

    • @analyticsCamp
      @analyticsCamp  3 months ago +1

      Thanks for watching :) If you have a specific LLM-based development/project in mind, please let me know. With such easy agentic access, I am also surprised how many AI users are still hooked on zero-shot prompting with paid interfaces!

    • @optiondrone5468
      @optiondrone5468 3 months ago

      @@analyticsCamp Ha ha, it also never made sense to me why people don't look into open-source LLMs 🤔 They're free, they don't limit your token size, you're free to experiment with different models, and most importantly your data (prompts) stays yours and doesn't automatically become OpenAI's property. Keep up the good work, looking forward to your next video.

  • @Nathan-pu9um
    @Nathan-pu9um 8 days ago +1

    Using low-code tools like n8n you can do this a lot more easily.

    • @analyticsCamp
      @analyticsCamp  8 days ago

      I agree, but for deployment and wider use n8n has pricing that could be beyond some users' budget (unlike CrewAI, which can work with local LLMs for free; a minimal sketch follows this thread). But thanks for watching :)

    • @Nathan-pu9um
      @Nathan-pu9um 8 days ago

      @@analyticsCamp I agree, but you can use n8n to create workflows connected to Pinecone or another vector database, so you can build your own custom agentic workflow internally.
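
      A minimal sketch of the free local setup mentioned above: CrewAI driving an Ollama model served on the default local port. It assumes a recent CrewAI release that exposes the built-in LLM wrapper (older versions wired Ollama in through LangChain instead), that the mistral model has already been pulled, and that the agent/task text is only a placeholder; the exact code from the video is on the channel's GitHub.

        from crewai import Agent, Task, Crew, LLM

        # Point CrewAI at a locally served Ollama model (default port 11434).
        local_llm = LLM(model="ollama/mistral", base_url="http://localhost:11434")

        researcher = Agent(
            role="Researcher",
            goal="Summarise a topic in three bullet points",
            backstory="A concise analyst who sticks to what the sources say.",
            llm=local_llm,
        )

        summary_task = Task(
            description="Explain what an AI agentic workflow is.",
            expected_output="Three short bullet points.",
            agent=researcher,
        )

        crew = Crew(agents=[researcher], tasks=[summary_task])
        print(crew.kickoff())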

  • @jarad4621
    @jarad4621 3 months ago

    Also, I'd never seen the Mistral idea, so this model would make a really good agent then, better than the others? Really helpful to know, glad I found this. Also, can you test Agent Swarm and let us know what the best agent framework is currently? Apparently CrewAI is not great for production?

    • @analyticsCamp
      @analyticsCamp  3 months ago +1

      Thanks for watching :) I have tested multiple models from Ollama, and Mistral seems to have better performance overall, across most tasks. Agent Swarm can be useful for VERY specialised tasks in which general LLMs get it totally wrong. Other than that, it will add to the time/cost of the build. But I'm not sure if I understood your question right!

  • @jarad4621
    @jarad4621 3 months ago +1

    Sorry, another question: am I able to use LM Studio with CrewAI as well? I wanted to test some other models, and its GPU acceleration lets models run better than Ollama for me. Is it still going to have problems due to the issues you fix with the model file, or is that issue not a problem for other local servers? Or is Ollama the best way because you can actually edit those things to make it work well? Thanks

    • @analyticsCamp
      @analyticsCamp  3 months ago +1

      I do not use LM Studio so I cannot comment on that. But Ollama via the terminal is pretty sturdy, and CrewAI should work with all Ollama models, though I have not tested them all (a quick smoke test is sketched below). If you run into issues you can report them here so others can know and/or help you :)
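
      To check that a given Ollama model responds before wiring it into CrewAI, here is a quick smoke test with the official ollama Python client, assuming `pip install ollama` and a running local Ollama server; the model name is just an example.

        import ollama

        # Ask the locally served model for a trivial reply; any pulled model works.
        response = ollama.chat(
            model="mistral",
            messages=[{"role": "user", "content": "Reply with the single word: ready"}],
        )
        print(response["message"]["content"])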

  • @jarad4621
    @jarad4621 3 months ago

    Awesome, I've been looking for some of this info for ages. Best video on agents after watching dozens of vids; nobody explains the problems with other models, or fixing the model file, or how to make sure the local models work. Many of these YT experts are just using local and other models and wondering why it's not working well.
    Can I use Phi-3 Mini locally as well, and does it need the same model setup? Also, will Llama 70B on the OpenRouter API actually work as a good agent, or does something need to be tweaked first? Nobody can answer these things, please help? Thanks!

    • @analyticsCamp
      @analyticsCamp  3 months ago

      Sure, you can essentially use any model listed in Ollama as long as you make a model file; there you can manage the temperature, etc. (see the Modelfile sketch after this thread). I have used Llama 70B before but, surprisingly, it did not give better responses than its 7B and 13B versions on most tasks! I recommend Llama 3 (I may be able to make a video on it if I get some free time, LOL).

    • @jarad4621
      @jarad4621 3 months ago

      @@analyticsCamp Awesome, thanks, I'll test the smaller ones first then.
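
      A sketch of the model-file idea mentioned in the reply above: write an Ollama Modelfile that pins the base model and generation parameters, then register it under a new name. It assumes the Ollama CLI is installed and the base model has been pulled; the phi3:mini tag and parameter values are only examples.

        import subprocess
        from pathlib import Path

        # An Ollama Modelfile sets the base model and generation parameters.
        modelfile = (
            "FROM phi3:mini\n"             # base model; swap in mistral, llama3, etc.
            "PARAMETER temperature 0.2\n"  # lower temperature for steadier agent output
            "PARAMETER num_ctx 4096\n"     # context window size
        )
        Path("Modelfile").write_text(modelfile)

        # Register it under a new name, same as: ollama create phi3-agent -f Modelfile
        subprocess.run(["ollama", "create", "phi3-agent", "-f", "Modelfile"], check=True)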

  • @arielle-cheriepaterson7851
    @arielle-cheriepaterson7851 1 month ago +1

    Are you available for consulting?

    • @analyticsCamp
      @analyticsCamp  1 month ago

      Hi, could you please send me an email with more details? (My email address is in my channel's About section.) Thanks :)

  • @EarthrightCanvas
    @EarthrightCanvas 1 month ago

    Can't follow.

    • @analyticsCamp
      @analyticsCamp  1 month ago

      Hi, the code and the process are on my GitHub (link in the description box), so you can follow at your own pace :)