Hey! I'm not 100% sure what you're asking. If you want to get more support with CrewAI and using local LLMs, I created a skool community for you guys to ask your questions and get more support: www.skool.com/ai-developer-accelerator/about
If you want to see a Crew build out a newsletter, you'll definitely want to check out this video here: ua-cam.com/video/Jl6BuoXcZPE/v-deo.html In that tutorial, I show you how to build an AI newsletter. All you need to do is change the topic in the code and you instantly have a crypto newsletter. Hope that helps!
I would check out the Ollama windows instructions on their site! I don’t have a windows machine but they looked pretty simple! The only gotcha is I think it’s still in beta
Hey, this is unrelated to this video and instead focuses on a question about CrewAI. Do you have any knowledge of this error: "Failed to convert text into a pydantic model due to the following error: Unexpected message with type at the position 1."? It's been a big roadblock for me trying to run my crew.
Based on what I've seen from similar errors when working with CrewAI, I usually get that error when I'm adding the @tool decorator to a python function that I want my crew to call. Is that where the issue is happening for you or is this happening somewhere else?
@@bhancock_ai So it's happening elsewhere. I talked to MrSentinel over on his channel, and he mentioned that an older version of CrewAI doesn't produce that error. I went ahead and tested that out myself, and it did get rid of the error. I saw there was an update for CrewAI 4 days ago and another today; I will test it again with the newest version. Thanks for the help though!
My laptop "died" suddenly while running this, maybe due to the heat. The laptop specs: Ryzen 7840, RTX 4050, 32 GB, Windows 11, Anaconda env. I noticed that the task had trouble getting some financial data. Any suggestions are appreciated -- thanks!
- **Learn how to run local language models (LLMs) on your machine**: By the end of the video, viewers will know how to run LLMs like Llama 2 and Mistral locally and connect them to Crew AI for free.
- **Access valuable source code for free**: Click the link in the description to access all the source code provided in the video without any cost.
- **Cover the four core technologies**: The tutorial starts by recapping the four technologies used, namely Ollama, Llama 2, Mistral, and Crew AI, ensuring understanding before proceeding.
- **Set up and run Llama 2 and Mistral on your machine**: Step-by-step guidance on setting up and running Llama 2 and Mistral using Ollama on your local machine.
- **Modify and configure LLMs for Crew AI compatibility**: Learn how to customize LLMs by creating Modelfiles with specific parameters to integrate them seamlessly with Crew AI.
- **Connect LLMs to a Crew AI example (Markdown validator)**: Connect local language models to Crew AI examples like the Markdown validator to demonstrate practical usage, such as analyzing Markdown files for errors and improvements.
- **Update environment variables for local host communication**: Update your environment variables to point to the local host running Ollama, enabling communication between Crew AI and the LLMs you've set up.
- Point to your local host where Ollama is running at 14:53
- Point your OpenAI model name to the newly configured large language model by setting up environment variables at 15:00
- Check the logs in the server folder to validate that the configuration is working properly at 15:34
- Delete the .env file to show that the setup still functions, demonstrating an alternative method at 16:15
- Create a new ChatOpenAI instance by providing the model name and base URL directly within the code for a more explicit approach at 16:23
- Activate the crew by running python main after setting up the large language model at 17:02
- Ensure the OpenAI key is specified to avoid errors at 17:11
- Monitor the server logs in real time to validate the execution at 17:23
- Connect Crew AI to a local LLM by specifying the LLM in the agents file for each agent at 20:57
- Provide detailed context in the tasks to ensure meaningful results with local LLMs at 22:52
- Be aware of limitations when using advanced features like asynchronous tasks with local language models at 24:00
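To make the env-var steps above concrete, here is a minimal Python sketch of the two approaches the summary describes. The variable names (OPENAI_API_BASE, OPENAI_MODEL_NAME) follow common OpenAI-SDK/LangChain conventions, and the model name "crewai-llama2" is a hypothetical example, not confirmed from the video:

```python
import os

# Approach 1: point the OpenAI-compatible client at the local Ollama server
# via environment variables (normally loaded from a .env file).
os.environ["OPENAI_API_BASE"] = "http://localhost:11434/v1"  # Ollama's default port
os.environ["OPENAI_MODEL_NAME"] = "crewai-llama2"            # custom model built from a Modelfile
os.environ["OPENAI_API_KEY"] = "NA"                          # must be non-empty; value is unused locally

# Approach 2: be explicit in code instead of relying on the .env file.
# `llm_config` is a stand-in for whatever client constructor you use
# (e.g. a ChatOpenAI-style class); only the parameters matter here.
llm_config = {
    "model": os.environ["OPENAI_MODEL_NAME"],
    "base_url": os.environ["OPENAI_API_BASE"],
}
print(llm_config)
```

Either way, the point is the same: the "OpenAI" client never talks to OpenAI; every request goes to the local Ollama server.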
lmao "I'm gonna teach you how to run Crew AI using local LLMs so you don't rack up a huge Open AI bill like I just did." Yes, this is exactly why I'm here. 😂
Hello Brandon. I tried to register on your page but I didn't receive an email like your website says I would. Maybe something isn't working. I would like to be part of your training. If you can send the code I would be happy.
This worked in Windows 11:
:: File name: create-mistral-model-file.bat
@echo off
:: Variables
set model_name=mistral
set custom_model_name=crewai-mistral
:: Get the base model
ollama pull %model_name%
:: Create the model file
ollama create %custom_model_name% -f .\MistralModelfile
In this video Brandon mentions two other Crew AI videos he has created. I've taken the crash course video and it is about the best Crew AI video I have seen. There are other good ones on UA-cam, but you should not miss Brandon's videos if you are interested in Crew AI.
Thanks Edward! I seriously appreciate you saying that. I put in a lot of work to get these tutorials just right so that means a lot to me!
If there is anything specific that'd you like to see, please let me know! I'm always open to suggestions!
CrewAI can do so much so I want to crank out a lot more videos for you guys!
@@bhancock_ai clearly an AI response bruh
Ok bot
I asked and you delivered! I'm at a loss for words to describe you. Just know that you are one of the best in the biz. And from the other comments, you can see that your work is very much appreciated. Thank you.
You're the best!
Not sure where my previous comments went to but I was able to work out all the issues I ran into.
^C^CTerminate batch job (Y/N)? n
Environment variables loaded from .env
Prisma schema loaded from prisma\schema.prisma
Datasource "db": PostgreSQL database "crew_ai_visualizer", schema "public" at "localhost:5432"
Error: P1000: Authentication failed against database server at `localhost`, the provided database credentials for `postgres` are not valid.
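For anyone hitting the same thing: Prisma's P1000 is an authentication failure, meaning the credentials embedded in DATABASE_URL don't match what the local Postgres server accepts. A sketch of the expected .env entry, with a placeholder password you'd replace with your own:

```
# .env — Prisma reads DATABASE_URL for the "db" datasource.
# P1000 means Postgres rejected these credentials, so the user/password
# below (placeholders) must match your actual local Postgres setup.
DATABASE_URL="postgresql://postgres:YOUR_PASSWORD@localhost:5432/crew_ai_visualizer?schema=public"
```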
I just subscribed. Please focus on local models when you can, this is fascinating! Thank you!
Thanks for the detailed explanation for setting up local LLMs and Agent crews, really informative for beginners like myself!
Thanks Rick! Glad it was helpful!
Thank you for sharing, looking forward to testing Crewai on my local systems. Have a great day. :-)
You are a great teacher. Very easy to follow and cover the topics thoroughly. So glad I found your channel ! Thanks for all your hard work and dedication! 👍⭐⭐⭐⭐⭐👍
Thank you for the detailed walk-through. It took me the whole evening, two conda environments, Gemini, and finally ChatGPT's help to set it up, but yay me. In the end it worked, but damn, is it slow without CUDA. I don't know which of my previous local LLM experiments decided I don't need CUDA in my life, so now I'm waiting for my major KDE update to reinstall it. I think the internet search won't work by default; it will probably require an API key for a search engine. But as I said, it's slow.
Excellent! Presented exactly as an educator would! I've been through many tutorials and all of them were too difficult to follow! You got it right by providing the workflow at the beginning as well as the programs needed! Great job!
Thanks man! I really appreciate you saying that! I always worry my videos are too long but there is just so much info that you have to know in order to use these technologies.
@@bhancock_ai videos can be long as long as there's an outline to follow and objectives to accomplish. I've been a teacher for 20 years and believe me your tutorials have been the best I've seen so far (for non techies like myself)
Thanks a ton for dropping this. I was literally working all afternoon on this very thing. That last part about which models support the different functionality was very applicable and time saver versus banging my desk.
I had also tried using both a local LLM with LMStudio (which mirrors the environmental setup) for my agents and GPT-4 for the manager_llm. Couldn’t get that to work.
I've been focused on making crews create a Python GUI for parsing videos as a PoC. I'd love to see you give it a whirl. Since we know that stand-alone chats are not great at full-fledged, more complex coding projects, I am trying to move past that hurdle by using a DevTeam Crew. Thus far, even with GPT-4, I've been unsuccessful and usually end up with just the example output I provide it.
Absolutely brilliant! Any chance you could do a video on deploying this to a server so it could be run remotely?
OMG, you are the Hero!! Nice video Brandon!!!
Thanks Taisen 😂 I appreciate it!
Great video Brandon!!!Thanks for taking your time to make it.
Of course! I have a lot more CrewAI content in the works for you guys!
If there is anything specific that you'd like me to add to the queue, please let me know!
Got both examples working. One thing you didn't cover: While using a local LLM, can you define the Crew to use sequential or hierarchical processes?
Hey mate, first thank you for making this video! First video I have by you and I really like your teaching style. I went to your web site and filled out the form but didn't get an email. I'll keep an eye on it but wanted to give you a heads up that you might have a bug. Thanks again for your contributions to all of this, and keep up the great work :)
very nice content thanks for the resource and the continued email updates 🎉🎉
Any reason why I am getting this error: "It seems we encountered an unexpected error while trying to use the tool. This was the error: can only concatenate str (not "dict") to str"? It seems like it is not reading the MarkdownTools file correctly.
You should show more examples of it actually working
Awesome..would love to see this running with openrouter…
Is OpenRouter just like Ollama? Does it provide some additional features or maybe it's easier to use?
That's a little bit of a lot is my favorite fray sound thank you
thank you so muchh
!!!
Thx for this one. Think I'll have to try out Poetry more... seems better than venv or plain conda.
Many thanks for the fascinating guide and tips; I really appreciate it!!
I have a question, or rather a favor to ask: could you make a guide on how to deploy and use Crew AI with open-source LLMs in production environments?
Looking forward for more. Thx
I'm hard at work on my next tutorial for you guys now!
Thanks for the great video please make a video that crewai multi agent for computer vision
Epic! thank you
Hi Brandon! Do you intend to create a video like others you've already made by creating a crewai application and deploying it on Vercel or another platform and generating an API or something like that to be used in production? That would help a lot.
Hey! That is the end goal! There is just so much foundational information that I want to cover first before making a full stack video like that.
From what I've seen on UA-cam there isn't a course that covers how to use CrewAI in a production environment. Have you seen one yet?
@@bhancock_ai Nowhere and I look all the time and can't find anything about using it in production. It would be amazing if you did it and it would be a pioneer here on UA-cam in relation to this.
Thanks, I learnt a lot. Have you encountered the "Action don't exist" issue with local LLMs? If so, how did you resolve it?
What a great video, thanks very much for sharing.
Thanks Renier! If you liked this video, I think you'll love the new CrewAI tutorial video I just released too!
ua-cam.com/video/OumQe3zotGU/v-deo.html
@@bhancock_ai Of course, I will watch the video for sure
Awesome, thank you!
You bet! I have more CrewAI content coming out soon that I'm sure you'll love as well!
Thanks, that's amazing!
I wonder if you can make a video showing how to use Crew AI as part of an API: you trigger the API and Crew AI does its magic,
preferably using Flask to code the API.
Thank you Brandon
Hey Youssef! You've read my mind! I plan on doing a video on this in the upcoming weeks. There are a few more foundational videos I want to do before making a tutorial like this.
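The pattern Youssef describes is small enough to sketch here. Below is a dependency-free outline using only the standard library; run_crew is a hypothetical stand-in for a real crew.kickoff() call, and a Flask version would look analogous (a route that parses JSON, kicks off the crew, and returns the result):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

def run_crew(topic: str) -> dict:
    """Stand-in for a real crew.kickoff(); a real app would build and run a Crew here."""
    return {"topic": topic, "result": f"report about {topic}"}

class CrewHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/kickoff":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(run_crew(payload.get("topic", ""))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep demo output quiet
        pass

# Serve on a free port in a background thread, then trigger one kickoff.
server = HTTPServer(("127.0.0.1", 0), CrewHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = Request(
    f"http://127.0.0.1:{server.server_port}/kickoff",
    data=json.dumps({"topic": "AI agents"}).encode(),
    headers={"Content-Type": "application/json"},
)
response = json.loads(urlopen(req).read())
server.shutdown()
print(response)
```

The one real-world caveat: a crew run can take minutes, so a production API would usually enqueue the kickoff and poll for the result rather than block the request like this sketch does.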
Good stuff! Thanks!
Can you do an updated one for Llama 3? I tried updating the script to llama3 but it didn't work; when I type "from llama3" the IDE doesn't recognize llama3. Any help would be awesome.
Thanks brandon. Are you going to build in the future something more advance like using next.js and fastapi for crewai
Hey Brandon,
Excellent tutorial! Although I have one doubt: why do we need to create the script file and Modelfile? Can't we directly set the model to "ollama/mistral", etc.?
Awaiting your reply; thank you for working hard on this tutorial :)
Hey! You could use the base model directly, but your results wouldn't turn out as well. It's important to include the stop words that we set up in the Modelfile.
Hope that clears things up!
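For anyone curious what that Modelfile looks like, the sketch below shows the general shape using Ollama's FROM/PARAMETER syntax. The specific stop words and temperature here are illustrative assumptions, not necessarily the exact values from the video:

```
# Modelfile — illustrative only; the stop words Brandon uses may differ.
FROM llama2

# Stop words keep the model from rambling past the point where
# CrewAI expects the agent's turn to end.
PARAMETER stop "Observation"
PARAMETER stop "Result"
PARAMETER temperature 0.8
```

You'd then register it with something like "ollama create crewai-llama2 -f ./Llama2Modelfile" (the name crewai-llama2 is just an example) and point your agents at that custom model.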
New subscriber. Good job
Thanks man
Can you do this with Google Colab for running the LLM and code? I appreciate ur videos man!
Thank you very much!
thank you!
Hi, my machine does not have much RAM. Can we connect to the Mistral model via an API like we do in the case of OpenAI? If yes, can you share an example? Many thanks. 🙏
Having a hard time following the code; using Win11 here, already tried WSL with Ubuntu installed. I got stuck right at the Modelfile...
This is great! can it work with Pydentic and instructor also? for function calling?
Hey! I have used Pydantic when defining tools for the Crew to use. Are you asking about something else?
Also, I briefly talk about how to use Pydantic with CrewAI in the first CrewAI crash course that I did a little bit ago. I'm not sure where I mentioned it in the video but here's the link:
ua-cam.com/video/sPzc6hMg7So/v-deo.htmlfeature=shared
@@bhancock_ai I meant using it for defining the response from the LLM with Instructor. Sometimes you enter an infinite loop when the response is not quite formatted right; it happens more with open-source LLMs.
It would be great to get a windows version of this.
For the advanced example, did you need the .env file? I don't think it was mentioned.
So I see that in Visual Studio you use Python, but for me only PowerShell is selectable, even after installing Python and adding it to the environment path. How can I add it so I can actually follow along with what you are doing?
I'm not seeing the modelfile configuration requirement on the crew ai docs. Is making this adjustment still necessary?
Your tutorials are excellent. How can I install Ollama on a Linux HPC cluster? I found that "pip install ollama" works fine for me. I created a virtual environment and installed all the packages. My question: how can I set up Ollama for local models on the HPC cluster after installing it through pip? Another question: can we use vLLM?
Great video. One issue I'm having with CrewAI against Local LLMs is having them properly call functions. Curious if you've run into this, and have any tips/videos. I was able to use LM Studio to serve up Mistral, and still have it wrapped in an OpenAI API ... but even with the OpenAI interface I'm still having sub-par results. (I'd prefer to use Ollama so I can spin it out to a cloud docker container with bigger resources.)
I'm in need of an adult! I am stuck at running the llama 2 model file .sh; it says I don't have an extension for debugging 'shell script'. Which extension do I need? Any advice?
Lol! You really did make me laugh with that, "I'm in need of an adult" 😂
I actually just launched a free Skool community and it's way easier to provide support in there.
Inside of Skool, you'll see I created a post about this video. Feel free to add a comment with the issue you're having and maybe a screenshot, and we'll all be able to better help you out!
www.skool.com/ai-developer-accelerator/about
404 page not found on the /v1 path; how do I resolve that?
Hey, How can we delete unnecessary LLMs duplicated during the install? Thanks
You'd have to check the Ollama docs, but I believe the command is "ollama rm <model>".
Many thanks for the amazing video!! I have a question though: why don't we access these open-source LLMs such as Mistral using the Hugging Face API instead of downloading the model locally? Is there a specific reason?
Can this approach be deployed to run online instead of on my computer?
Instead of building agents, can I use LangChain agents like the CSV agent, ReAct, and so on? If yes, how do I get it done? Thanks in advance.
Is it possible to use multiple Ollama servers or am I limited to one ENV variable?
It would be neat to diversify models within a Crew, for example "command-r" for manager agent and "codellama" for codegen agent.
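One server is in fact enough: Ollama loads models on demand, and each OpenAI-compatible request names its model, so agent-level diversity comes from the per-request "model" field rather than from multiple ENV variables. A stdlib-only illustration of the idea (the endpoint path and payload shape follow Ollama's OpenAI-compatible API; how you'd wire this into a specific crew framework is left out):

```python
import json

OLLAMA_BASE = "http://localhost:11434/v1"  # one server for every agent

def chat_request(model: str, prompt: str) -> tuple[str, bytes]:
    """Build an OpenAI-compatible chat request for Ollama's /v1 endpoint.
    Illustrative: a real crew would do this through its LLM client,
    passing a different model per agent instead of one global ENV var."""
    url = f"{OLLAMA_BASE}/chat/completions"
    body = json.dumps({
        "model": model,  # the per-request model name is what diversifies agents
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, body

manager = chat_request("command-r", "Plan the work")
coder = chat_request("codellama", "Write the code")
# Both hit the same server; only the "model" field differs.
```

The practical limit is RAM/VRAM: Ollama will swap models in and out as requests for different models arrive, which can be slow if both don't fit at once.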
Awesome!
Why do we need a custom modelfile for crewai?
Firstly, great job on the video. I am just learning how to program, and with autism it gets a bit brain-addling.
ERROR: "ModuleNotFoundError: No module named 'crewai'"
It does not matter what I do to install it in the Visual Studio terminal ("pip install crewai", etc.); any thoughts?
I'm facing the same issue
@@salespusher did you fix the error?
If you are running VS Code, I found that clicking the play button does not work, but if I click the down arrow beside it and choose "Run Python File", it works.
Thanks for your tutorial regarding Crew AI; it has helped me greatly.
Do you have any successful examples of using local Ollama openhermes with "from crewai_tools import CSVSearchTool"? My crew is able to run; however, while inserting embeddings into ChromaDB, it encountered a "404 missing page" error.
First of all, I'd like to thank you for the tutorial! I followed it and tried to run with SerperDevTool, and it is not working with either Llama 2 or Mistral. Do you have any clue about that?
It fails to recognize/understand the tools and fails to use them correctly. What can I do?
Hey Brandon, thanks for the video. It's amazing to see what you're doing here with the channel; everything is so well put together and extremely helpful. Thank you!
I have a question while following along; there is a strange error:
"Traceback (most recent call last):
File "/Users/billie/Documents/GitHub/crew-ai-local-llm-main/crewai-advanced-example/main.py", line 1, in
from crewai import Crew
ModuleNotFoundError: No module named 'crewai'"
Basically it's saying it can't find 'crewai'. I have installed Poetry and done everything step by step, but couldn't wrap my head around it. Am I missing a step, Brandon? Thanks again for the information.
Does this also work on an Windows machine?
Yes! If you go to ollama's website, they have instructions on how to setup ollama on your windows machine
🎯 Key Takeaways for quick navigation:
00:00 *🚀 Introduction to the video and main goal*
- Learn how to run Crew AI for free using Ollama
- Run LLMs locally, such as Llama 2 and Mistral
- Connect these LLMs to Crew AI agents to run Crew AI for free
00:55 *🛠️ Four core technologies in this tutorial*
- Ollama: a tool for downloading, modifying, and running LLMs locally
- Llama 2: a language model trained by Meta, with different model sizes and RAM requirements
- Mistral: a large language model with remarkable performance compared to Llama 2
- Crew AI: a framework for creating and managing AI agents to solve complex tasks
04:12 *🚀 Setting up Ollama to run LLMs locally*
- Download and install Ollama, moving it to the Applications folder
- Set up Ollama in the terminal, then download and run the Llama 2 model locally
- Set up the Mistral model the same way, preparing the LLMs for use with Crew AI
08:53 *⚙️ Configuring custom models for Crew AI*
- Create model files to customize LLM-specific settings
- Run scripts to create and configure custom Llama 2 and Mistral models for use with Crew AI
- Verify the list of installed models with the "ollama list" command
11:00 *🚀 Practical example: connecting local LLMs to Crew AI*
- Walkthrough of the example using a Markdown validator with Crew AI
- Run the example by connecting the custom Llama 2 model to Crew AI
- Show the output and feedback from the Markdown validation example using Llama 2
Made with HARPA AI
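The custom-model step in the summary above (08:53) boils down to a small config file per model. A minimal sketch, with illustrative parameter values (the stop token keeps the model from rambling past the output CrewAI expects):

```
# Llama2Modelfile -- minimal custom-model config (values are illustrative)
FROM llama2
PARAMETER temperature 0.8
PARAMETER stop Result
```

Then `ollama create crewai-llama2 -f ./Llama2Modelfile` registers it, and `ollama list` should show both `llama2` and `crewai-llama2`.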
cool vid. does this work with windows?
Hi! I can make it work with OpenAI, but once I try to run it with Ollama, it starts showing me errors like:
Action '' don't exist, these are the only available......
Do you have any idea how to fix it?
Is it possible to do a tutorial hooking it up to Streamlit or Chainlit?
I haven't gotten to use chainlit before. How is it different than the agents we are building using CrewAI?
@bhancock_ai It's a chat web interface on top. Chainlit works easily on top of LangChain, but I haven't seen anyone do a tutorial on a web interface, only the terminal.
Please update this for Llama 3.
Your videos are very informative, sir, but I am running a Windows laptop and it is difficult to follow. Even after downloading the files from your GitHub, I am unable to run them properly; I am facing so many errors.
Good video. Is it possible to create two agents using the same model? So that I only have to download the model once but they are shaped differently due to the parameters of the Modelfile. If I make two modelfiles with the same model, does it download 2 times the same model or does it download once and both agents make requests to the same model?
Hey! I'm not 100% sure what you're asking. If you want to get more support with CrewAI and using local LLMs, I created a skool community for you guys to ask your questions and get more support:
www.skool.com/ai-developer-accelerator/about
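On the Modelfile question above: Ollama stores model weights as content-addressed layers, so two Modelfiles built FROM the same base should reuse a single download; only the small config layer differs between them. A sketch (names and parameters are illustrative):

```
# ResearcherModelfile
FROM mistral
PARAMETER temperature 0.2

# WriterModelfile
FROM mistral
PARAMETER temperature 0.9
```

`ollama pull mistral` runs once; after `ollama create researcher -f ResearcherModelfile` and `ollama create writer -f WriterModelfile`, both custom models point at the same base weights on disk, and each agent can be given its own model name.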
With crypto on the rise, can you show a crew example that writes a newsletter about crypto, and/or a crypto coin research crew that searches across the web and Twitter?
If you want to see a Crew build out a newsletter, you'll definitely want to check out this video here:
ua-cam.com/video/Jl6BuoXcZPE/v-deo.html
In that tutorial, I show you how to build an AI newsletter. All you need to do is change the topic in the code and you instantly have a crypto newsletter.
Hope that helps!
Halfway through the video I realized that some parts of it don't work on Windows. For example, the chmod command is for Linux/macOS, right?
I don't have the /api/generate or /chat endpoints. How do I get those? I'm using Open WebUI and that works...?
How do I run this on a MacBook Air M2?
How do I enable auto-accepting actions so I don't have to press Enter every minute?
How can I follow the first 10 minutes of the video on the Windows operating system?
I would check out the Ollama Windows instructions on their site! I don't have a Windows machine, but they looked pretty simple!
The only gotcha is that I think it's still in beta.
Is there a requirements.txt that you can share? I'm getting package incompatibility errors.
Never mind, I see that's what Poetry does for you.
Why is it called ChatOpenAI() if we are just using the local model and not OpenAI?
I'm trying to replace OpenAI with Llama 2.
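On the ChatOpenAI naming: CrewAI talks to models through OpenAI's client library, and Ollama exposes an OpenAI-compatible HTTP API, so the "OpenAI" client works for local models once it's pointed at localhost. A sketch of the environment-variable approach (values are illustrative; "crewai-llama2" would be whatever name you gave `ollama create`):

```python
import os

# Point the OpenAI client at the local Ollama server instead of api.openai.com.
os.environ["OPENAI_API_BASE"] = "http://localhost:11434/v1"
os.environ["OPENAI_MODEL_NAME"] = "crewai-llama2"
os.environ["OPENAI_API_KEY"] = "NA"  # the client requires a value; Ollama ignores it
```

Set these before constructing the crew, and no OpenAI account is touched.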
I'm doing a project with Danswer AI. I'm using Docker and cloned the Danswer repository. Can you help me out?
Hey, this is unrelated to this video; it's a question about CrewAI. Do you have any knowledge of this error:
"Failed to convert text into a pydantic model due to the following error: Unexpected message with type at the position 1."
It's been a big roadblock for me when trying to run my crew.
Based on what I've seen from similar errors when working with CrewAI, I usually get that error when I'm adding the @tool decorator to a python function that I want my crew to call.
Is that where the issue is happening for you or is this happening somewhere else?
@bhancock_ai So it's happening elsewhere. I talked to MrSentinel over on his channel, and he mentioned that an older version of CrewAI doesn't produce that error. I went ahead and tested that myself, and it did get rid of the error. I saw there was an update to CrewAI 4 days ago and another today; I will test it again with the newest version. Thanks for the help though!
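For reference on the @tool pattern discussed above: in my experience, that pydantic conversion error often traces back to a tool function missing a docstring or type hints, which CrewAI needs to build the tool's schema. A sketch of the pattern (the decorator's import path has moved between CrewAI versions, so a stub fallback keeps this runnable on its own):

```python
try:
    from crewai_tools import tool  # location varies across CrewAI versions
except ImportError:
    # Minimal stand-in so this sketch runs without crewai_tools installed.
    def tool(name):
        def wrap(fn):
            return fn
        return wrap

@tool("Multiply")
def multiply(a: int, b: int) -> int:
    """Multiply two integers and return the result."""
    return a * b
```

The docstring and annotations aren't optional decoration here; they're what the framework converts into the tool's argument schema.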
How do I run it for Tavern?
Can we run this on our custom data?
How can we use the Ollama Python library for this?
My laptop died suddenly while running this, maybe due to the heat. Laptop specs: Ryzen 7840, RTX 4050, 32 GB, Windows 11, Anaconda env. I noticed that the task had trouble getting some financial data. Any suggestion is appreciated -- thanks!
On second thought, I believe the problem is the chmod command. I have zero dev experience; can someone help, please?
- **Learn how to run local language models (LLMs) on your machine**: By the end of the video, viewers will know how to run LLMs like Llama 2 and Mistral locally and connect them to Crew AI for free.
- **Access valuable source code for free**: Click the link in the description to access all the source code from the video at no cost.
- **Cover the four core technologies**: The tutorial starts by recapping the four technologies used, namely Ollama, Llama 2, Mistral, and Crew AI. Ensure understanding before proceeding.
- **Set up and run Llama 2 and Mistral on your machine**: Step-by-step guidance on setting up and running Llama 2 and Mistral using Ollama on your local machine.
- **Modify and configure LLMs for Crew AI compatibility**: Learn how to customize LLMs by creating model files with specific parameters to seamlessly integrate them with Crew AI.
- **Connect LLMs to a Crew AI example - Markdown validator**: Connect local language models to Crew AI examples like the Markdown validator to demonstrate practical usage, such as analyzing Markdown files for errors and improvements.
- **Update environment variables for localhost communication**: Update your environment variables to point to the localhost where Ollama is running, so Crew AI can communicate with the LLMs you've set up.
- Point to your localhost where Ollama is running at 14:53
- Ensure you point your OpenAI model name to the newly configured large language model by setting up environment variables at 15:00
- Check the logs in the server folder to validate that the configuration is working properly at 15:34
- Delete the .env file to show that the setup still functions, demonstrating an alternative method at 16:15
- Create a new ChatOpenAI by providing the model name and base URL directly within the code for a more explicit approach at 16:23
- Activate the crew by running python main.py after setting up the large language model at 17:02
- Ensure the OpenAI key is specified to avoid errors at 17:11
- Monitor the server logs in real time to validate the execution at 17:23
- Connect Crew AI to a local LLM by specifying the LLM in the agents file for each agent and task at 20:57
- Provide detailed context in the tasks to ensure meaningful results with local LLMs at 22:52
- Be aware of limitations in using advanced features like asynchronous tasks when working with local language models at 24:00
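The "explicit" approach from the 16:23 step above can be sketched like this (assumes LangChain's ChatOpenAI; a stub fallback keeps the sketch self-contained, and "crewai-llama2" is whatever model name `ollama create` produced):

```python
try:
    from langchain_openai import ChatOpenAI
except ImportError:
    class ChatOpenAI:
        """Minimal stand-in so this sketch runs without langchain installed."""
        def __init__(self, **kwargs):
            self.config = kwargs

llm = ChatOpenAI(
    model="crewai-llama2",
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="NA",  # required field; ignored by Ollama
)

# Each Agent(...) can then be constructed with llm=llm.
```

This makes the local wiring visible in code instead of hiding it in environment variables, which is handy when different agents use different models.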
Thanks, Platano.
lmao "I'm gonna teach you how to run Crew AI using local LLMs so you don't rack up a huge Open AI bill like I just did."
Yes, this is exactly why I'm here. 😂
I'd say it didn't work: it gave me similar output but never finished.
Can we deploy this as a chatbot? If yes, how?
I'm not receiving the source code email.
Did the search code email come through? If not, let me know and I'll make sure you get it!
Hello Brandon. I tried to register on your page, but I didn't receive an email like your website says I would. Maybe something isn't working. I would like to be part of your training; if you can send the code, I would be happy.
How do I run chmod on Windows?
It's a great concept, but it still has many rough edges...
Is there a Windows version?
This worked in Windows11:
:: File name: create-mistral-model-file.bat
@echo off
:: Variables
set model_name=mistral
set custom_model_name=crewai-mistral
:: Get the base model
ollama pull %model_name%
:: Create the model file
ollama create %custom_model_name% -f .\MistralModelfile
It didn't work. It kept looping.
The code does not work on Windows.
I tried to edit my comment but it gives me an error.
is it free of censorship?
It's from Meta. I doubt it.