Great content! This was my first exposure to Groq... the potential here is pretty amazing! Thank you for your perspective and candid explanations that really help to grasp the "ground truth" of these technologies. I love watching the progress in ML and LLMs, as people collectively explore boundaries and breakthroughs!
Thanks for sharing, looking forward to the follow-up videos with LangGraph and Ollama. Thank you for your work!
This is really inspiring. It opens up all sorts of possibilities in terms of document processing, and combining it with web search.
Thanks for the video mate, I feel like you're one of only a few on YouTube that actually dive a little deeper into these tools using different examples, other than the standard copy-paste examples we see from everyone else. Appreciate it.
Super cool! Wild to see it run through the whole agent flow in 11 seconds.
Yeah, I kept thinking it was a previous run when I was testing it, until I realized it is just that quick.
Not so wild if you're used to llama.cpp rather than Python crap.
@samwitteveenai Excellent video. Looking forward to the LangGraph video on the same use case.
@@clray123 I want to do stuff, not read instructions.
Really looking forward to your LangGraph video. I think it’s the best option, but also the hardest to learn
I do feel LangGraph is more stable and it feels more like programming to me. I agree it is more work at the start to learn etc. I want to try and bake in RAG as a tool with the next one too
Yes, looking forward to the LangGraph video.
An awesome video focusing on effective use of Groq and Llama3 for agentic workflows. Looking forward to the LangGraph video even more. Some minor issues toward the end of the lab, but it still teaches key concepts and capabilities. (Couldn't find the CSV file.)
Would be interesting to integrate LangSmith as well to compare answer quality between Mixtral and Llama3.
Glad you liked it. There is no CSV file; not sure what you are referring to there.
What a great explanation!
Perhaps next time you could delve into constructing an internal knowledge retrieval (RAG) system for information retrieval, offering an alternative to relying solely on web searches?
Great job! I'm super excited to try CrewAI. With Llama 3, it's so promising!
The future is being written now, friends :)
This is a reminder, from a human, to other humans who are using this channel to learn how to implement things like this, to not fully hand your brain to AI. Customer complaint emails might be able to be handled automatically, but the humans running the system can't fix things if they don't know what people are complaining about. Make sure to include some feedback mechanism in what you're doing so that humans can maintain observability of the AI system and the world that the AI system is processing.
Yep, super well said. That's how the flywheel of service improvement works, and AI will make it spin way harder.
Yet if the humans observe, for example, hallucinations from the AI, there is no way to troubleshoot that due to the black-box aspect of high-parameter AI. The AI will most likely stop its hallucinations, but perhaps only after two or three (ten😂?) customers, and then the only solution is prompting; there will never be the reliability of programming, or of finding and fixing broken code.
Perhaps the answer is bots watching other bots and fixing their mistakes 😅
Hi @Sam, I can't thank you enough for all the effort you are putting in to share this incredible content. It helped, and is still helping, me build my project around agents.
Thanks @Sam Witteveen, it has been very informative. I will start working on my RAG-based project now with the help of your Colab notebook!
Llama3 is amazing for sure, but so is Sam. 😀 Thx for sharing.
Llama3 is amazing. I have replaced so many tasks that I used to use ChatGPT for with Llama3 70B.
Totally agree, 3.5 is looking pretty old now.
Video is great as always, but this time the thumbnail is awesome.
Truth be told, the thumbnail is what convinced me to make the video. 😀
Dude thank you for all of your videos. You’re awesome.
Glad you find them useful.
First off, great video - but I have a question and I realize this is a noob question so sorry 'bout it. In the part right around 4:00 you say, "okay here we're setting up our groq api key..." But I'm confused about where to put this key. I have the API key already, but I can't find where you put it in the code. Does it go in the os.environ or in the userdata.get? Or both? Or neither? Thanks so much for the video again. If you could help me figure out this part, that's the only thing I'm confused about. Thanks.
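For anyone else stuck on that step: in a Colab notebook the key itself is normally pasted into the "Secrets" sidebar, and the code only reads it from there. A minimal sketch of that pattern, assuming the secret has been saved under the name GROQ_API_KEY (the notebook may use a different name):

```python
import os
from google.colab import userdata  # Colab-only helper for reading stored secrets

# userdata.get() pulls the key you saved in Colab's Secrets sidebar;
# putting it into os.environ makes it visible to any library that looks for GROQ_API_KEY.
os.environ["GROQ_API_KEY"] = userdata.get("GROQ_API_KEY")
```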
Hi Sam, another excellent tutorial. I've posted it in the CrewAI Discord channel. Thanks, Paul
Thanks Paul
And make sure you comply with GDPR when you pass on all that private, confidential, personal customer information they send you in the email to some external service such as "groq"... You will need to make customers read 40 pages beforehand and sign a release before they are allowed to send you an email.
And this is why a local LLM makes a lot of sense. The other element is that whatever you do needs to be discoverable. When the lawsuits start flying, the logging will need to be of high quality.
@@gavinknight8560 You can always ask an LLM to fake the logs. The reality is that GDPR is only used to extort money and kickbacks from big companies, because there is absolutely no way to check compliance (e.g. if I say I deleted all your data, there is no need, or technical possibility, to prove that I did).
@Sam, thanks. That was excellent help, as always.
When I am using Groq with Llama3, I get this error message when I execute the agent:
ERROR:root:Failed to get supported params: argument of type 'NoneType' is not iterable
What am I doing wrong?
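Hard to say without seeing the notebook code, but for comparison, here is a rough sketch of one common way Groq-hosted Llama 3 gets wired into CrewAI via LangChain. The package, model id, and agent fields are assumptions about a typical setup, not a diagnosis of this exact error:

```python
from langchain_groq import ChatGroq
from crewai import Agent

# Assumes GROQ_API_KEY is set in the environment and that this CrewAI version
# accepts a LangChain chat model as the agent's llm.
llm = ChatGroq(
    model_name="llama3-70b-8192",  # model id as exposed by Groq
    temperature=0,
)

categorizer = Agent(
    role="Email categorizer",
    goal="Label each incoming customer email before a reply is drafted",
    backstory="You read customer emails and classify them by intent.",
    llm=llm,
    verbose=True,
)
```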
Thanks Sam, but is there any option to connect this Python code to a real Gmail or other inbox?
Good video. Can you maybe address SQL-based RAG with Llama3 and CrewAI? Such as a product recommendation system over an SQL inventory, with maybe a fallback to internet search?
Great video, added points for clever references to The Beatles.
Ringo thanks you very much.
Thanks for all your uploads Sam, your explanations are always amazing! I was wondering if you provide consulting sessions or advice on other AI projects as well. I've got a conversational AI agent that I've been trying to build up for a specific use case in valuations that I would love to talk to you about.
Thanks for the kind words. Best to contact me on LinkedIn for any consulting etc.
This is not an agentic flow, this is just a regular pipeline built with CrewAI.
Hey, new to the agent space and I was thinking the same thing. Can you give me some ideas where frameworks like CrewAI would actually be useful and not just an unnecessary layer of abstraction? My current understanding is that they help if you need to implement some sort of loop/cycle into the workflow.
Awesome video, really helpful.
Any chance of showing how to integrate LangChain tools into CrewAI? Specifically gptresearcher?
Great, but is it possible to allow internet search in the Ollama web UI?
How can we get it to make complex WordPress plugins with multiple files without losing progress along the way? It just forgets stuff and leaves placeholders, causing us to go in circles if it's too complex.
Hey Sam. Thanks again for the video. I was wondering if in the next video with LangGraph, or the one after, you could show us how we can do the internal RAG you mentioned for production-level apps.
How did you create the thumbnail?
Amazing video ❤
This is very impressive indeed, thanks for sharing! I suspect the upcoming Ollama 7b version might not be quite as accurate, but this gave me an idea.
I was thinking it might work to generate 2 replies for each of, say, 50 emails from a huge AI, and use the manually chosen replies as examples for the smaller models. It feels like cheating, but I think it might give the smaller models an extra accuracy boost they might need.
Yeah, creating good ICL exemplars works really well. I have done a project recently that makes good use of this with the Haiku model.
@@samwitteveenai Makes sense it wasn't an original idea! Thanks for classifying it for me, now I can look into it further :)
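For anyone who wants to try the exemplar idea from this thread, here is a toy sketch: hand-picked replies from a large model become few-shot examples in the prompt sent to the smaller model. The emails and replies below are made up for illustration:

```python
# Hand-picked (email, reply) pairs chosen from the large model's outputs.
exemplars = [
    {"email": "My package arrived damaged.",
     "reply": "I'm sorry to hear that. We'll send a replacement right away."},
    {"email": "How do I reset my password?",
     "reply": "You can reset it from the login page via 'Forgot password'."},
]

def build_prompt(new_email: str) -> str:
    """Builds a few-shot prompt for the smaller model from the chosen exemplars."""
    shots = "\n\n".join(
        f"Email: {ex['email']}\nReply: {ex['reply']}" for ex in exemplars
    )
    return f"{shots}\n\nEmail: {new_email}\nReply:"

print(build_prompt("Where is my refund?"))
```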
Sam, is CrewAI production ready? It causes a lot of internal server errors in production.
I would say it isn't production ready currently. I use it more for trying ideas out quickly and then remaking them in LangGraph or my own little framework
What's the need for crewAI here, or for similar examples? From my understanding, this could be passed into a simple sequential LLM chain and be much simpler. I'm new to AI agents and LLM applications so bear with me, just a genuine question. Any replies would be awesome!
It's the decision points and parsing (and acting on) those decisions. In many ways I set this up to compare with the LangGraph example that followed it. The CrewAI framework can also be used to decide the next steps itself, though I feel this often isn't reliable.
Please do it with Ollama locally. It would be really nice to have some more multi-agent examples. By the way, as per what you asked in the other video about other languages, Llama3 is working pretty nicely in Brazilian Portuguese. Thanks for everything!
Interesting to hear it is doing well in Brazilian Portuguese
Never really used CrewAI, but I have an email assistant like this with 2 brains (still upgrading): 1 for tools for researching/leaving notes/reading notes/etc., and then another for the response to the email after reviewing the returns from the tool agent. It replies directly to the clients and has RAG via Ollama!
Although it's all Python.
I love this type of agentic workflow. I really should learn more about CrewAI but I dunno, it still seems annoying to me for some odd reason haha. Any other tips that weren't in the vid for coders who are hesitant to use CrewAI?
I don't understand why we need to complicate a simple email reply task with crews and agents???
A simple prompt is sufficient to categorize the email and reply based on the prompt.
Someone please explain why the complication?
I don't disagree, this could be done with just a chain of prompts. I tried to keep this simple. Where the Agent elements come into their own more is with multiple decision points where the LLM is choosing the flow path.
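For readers weighing the two approaches, here is a minimal sketch of the plain chain-of-prompts version using the Groq Python client: one call to categorize, one call to reply. It assumes GROQ_API_KEY is set; the model id and email text are just placeholders:

```python
import os
from groq import Groq

client = Groq(api_key=os.environ["GROQ_API_KEY"])
MODEL = "llama3-70b-8192"  # placeholder model id

def ask(prompt: str) -> str:
    """Single LLM call; returns the text of the first completion."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

email = "Hi, my order arrived two weeks late and the box was crushed."

# Step 1: categorize the email.
category = ask(f"Categorize this email as complaint, question, or praise:\n{email}")

# Step 2: draft a reply conditioned on the category.
reply = ask(f"This email was categorized as: {category}\nWrite a short, polite reply to it:\n{email}")
print(reply)
```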
Great tutorial, thanks a lot.
Nice! Do you know if Llama3 powered by Groq is usable with AutoGen instead of CrewAI?
I haven't tried it but I think it should be.
Thank you! It would be nice to have the same kind of tutorial with Autogen. I really appreciate the quality of the work you are providing
Thank you Sam
Why do the 3rd and 4th have orange sunglasses?
They are the rebels.
damnit I was busy coding >,< lol brb watching
I know the feeling.
Llama 3 70B Instruct starts producing junk output once the conversation gets beyond 8k. Pretty unusable with gpt-pilot, for example.
How can you go beyond 8k if the context length is a maximum of 8k?
@@alizhadigerov9599 I guess the only thing to do is employ a sliding window of some kind. Maybe compress old content. There are articles about context size extension. I was using Ollama; it may have a problem with how it handles context size.
Why are you stuffing all that in there? You can summarize the conversation or use other techniques to manage that. It's just lazy to stuff things in and let the model take care of it.
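A toy sketch of the sliding-window idea from this thread: keep the system prompt and drop the oldest turns until the history fits. The word-count budget here is a rough stand-in for real token counting, purely for illustration:

```python
def trim_history(messages, max_tokens=6000):
    """Keep system messages plus the most recent turns under a rough token budget."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    def rough_tokens(msgs):
        # Crude word-based estimate; swap in a real tokenizer for production use.
        return sum(len(m["content"].split()) for m in msgs)

    # Drop the oldest non-system turns until the estimate fits the budget.
    while rest and rough_tokens(system + rest) > max_tokens:
        rest.pop(0)
    return system + rest
```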
@@choiswimmer When I need to have gpt-pilot agents lectured on being lazy, I will be sure to get in touch with you.
Bit like the current state of ChatGPT.
419 scammers / scambaiters you better listen up...
I suspect that signing machine-generated emails as "Sarah, the resident manager" when there is no "Sarah" is at the very least unethical, and potentially illegal (depending on the context).
I mean, when Indian call center guys try to reach out to you, do they present themselves as Radesh from Mumbai or as Paul from Missouri?
@@sd5853 You mean Indian scam centers? Yes, scammers usually assume a different identity from their own because it aids their scam. Do you want your company to be perceived as liars and scammers?
Fuck my brain, I can't understand!!
There are some issues when you do stuff like this in real life:
1. Groq is not actually that fast when you use it with CrewAI.
2. In every use case I have tried, just using code is faster and more accurate than using a team of agents.
3. Most LLMs can't even consistently format the Agent messages properly, resulting in a massive waste of tokens as wrongly formatted messages repeat over and over.
This makes the utility of this method for writing reliable production code very limited right now.
That is the REALITY I am seeing with dirty hands.
Please share your own thoughts or experiences with me!
I wouldn't use CrewAI for production at all currently. It is like an idea-testing tool/toy. It makes the trade-off of getting fast and easy creation of agents by giving up full control, custom checking, validations, etc.
@@samwitteveenai OK , thanks for the reply ! Ya makes sense 👍