As pointed out by a few viewers, for certain queries the agent network may go into a loop where none of the agents can provide a sufficient answer and the supervisor gets stuck in redirection paralysis. The way to prevent this is to revise the system prompts for the supervisor and the individual agents to include a circuit-breaker: explicit instructions that if an agent is not providing an answer, the supervisor should hand off to the general LLM agent. This prevents unnecessary round trips and makes the graph perform faster.
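A minimal sketch of the counter-based circuit-breaker, assuming the graph state is a plain dict; the names `hops`, `next_agent`, and `llm_agent` are illustrative, not taken from the tutorial code:

```python
# Sketch of a circuit-breaker for a supervisor/agent loop. The graph
# state is modeled as a plain dict; key names are illustrative.

MAX_HOPS = 3  # how many redirections the supervisor tolerates


def route(state: dict) -> str:
    """Return the next node to run. A counter stored in the graph state
    is incremented on every pass through the supervisor; once it exceeds
    MAX_HOPS, route to the general-purpose LLM agent instead of
    redirecting again, which breaks the loop."""
    state["hops"] = state.get("hops", 0) + 1
    if state["hops"] > MAX_HOPS:
        return "llm_agent"  # circuit-breaker tripped: fall back
    return state.get("next_agent", "llm_agent")
```

The same idea works with any graph framework that carries mutable state between nodes: the counter lives in the state, and the supervisor's routing function checks it before redirecting.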
Thanks for the tutorial. You should really explain how you arrive at these complex solutions!
Specific use cases can be very disruptive; for example, a real-time travel manager capable of searching for activities, flights, hotels...
@Jose-d5h4c, you're absolutely right - multi-agent networks can open possibilities for disruptive use cases, like the travel example you provided.
A few years ago Andreessen Horowitz published the now classic "Why Software Is Eating the World" a16z.com/why-software-is-eating-the-world.
We are entering a phase where AI will be eating the world, and much sooner than people predict. It took computers and software 70 years to start eating the world, but AI is almost there in less than 10. But that's not the scariest part. When quantum computing goes mainstream, a tsunami will be unleashed, changing paradigms about work and society well beyond the currently prognosticated scenarios. The final blow will come from developments in nanotechnology and robotics, where we will reach the point of being able to grow organic matter and shape it into any robotic being imaginable. Quantum chips running AGI inside completely human-looking, human-acting robots: that's the true disruption.
So, what is there to do? While this may sound dark, it's not necessarily bad news. We are nearing the end of the knowledge era that started with Descartes and Gutenberg's press. From the 15th century onwards, we've worshipped at the altar of knowledge. As recently as 20-30 years ago, knowledge and the hoarding of knowledge meant big business. Take the MLS real estate listing services in the States: the fact that they held the information meant a whole industry profited enormously just by locking everyone else out of that valuable knowledge.
But now, knowledge is everywhere. And it is almost free. So, it has become a commodity. The king is dead, long live...wait a second, but what's next?
We've entered the age of wisdom. Knowledge alone is not enough anymore - the age of AI will bring us unlimited wisdom. Example: while the MLS can tell a buyer which homes are for sale, their physical properties, and their prices, a true multi-agent AGI will combine millions of data points and truly understand whether a home purchase - that particular home purchase - is the right or the best choice for us. Knowledge has transformed into wisdom.
The next natural question is then: with all that wisdom coming [almost] for free, what do we do with it? Well, that will be a deeply personal choice. For my part, I choose to remain optimistic and think all of this will benefit humanity and let us live in a better world.
Thanks for the comment and sorry for the long answer. I hope it made a little bit of sense.
wa...o...classic ... many thanks for the in-depth knowledge bro.
@Condinginsight, thank you for the comment.
Really great video, thank you!
Glad it helped! Care to elaborate on what you enjoyed about the video? It helps small creators like myself make better content.
@@AISoftwareDevelopers Really clearly explained and very logical build up from scratch. Great to follow along. Thank you!
@@AISoftwareDevelopers Will clone the repo next week and try it out!
Great, feel free to DM on the Discord server, if you run into any issues.
The legend is at it again! Going to watch this later, thanks a lot man! A video on how to build local agents using Ollama would be great. Many European companies have to adhere to GDPR and don't want their data to leave the EU. If you use OpenAI, your data is stored in the US for 30 days, which is a no-go for these EU companies. What tools and hardware are needed for local inferencing?
Local agents with Ollama is a great idea. The key will be to find a model that supports tool calling with structured outputs. On small machines, 7B models typically struggle with these tasks, but 37B and 80B models do a better job, for which one needs GPUs. Thanks for the valuable suggestion!
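For reference, a rough sketch of what local tool calling looks like with the `ollama` Python package. The `get_weather` tool, the `llama3.1` model name, and the helper function are illustrative assumptions, not a definitive setup:

```python
# Sketch: tool calling against a local Ollama server.
# Assumes `pip install ollama`, a running server, and a pulled model
# that supports tools (e.g. llama3.1) - all illustrative choices.

def get_weather(city: str) -> str:
    """Stub tool; a real implementation would call a weather API."""
    return f"Sunny in {city}"

# JSON-schema description of the tool, in the shape Ollama's chat API expects.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def ask_local_model(question: str) -> str:
    """Send a tool-enabled request to the local server and execute any
    tool call the model makes. Requires a running Ollama instance."""
    import ollama  # imported lazily so the schema above works without it
    resp = ollama.chat(
        model="llama3.1",
        messages=[{"role": "user", "content": question}],
        tools=TOOLS,
    )
    for call in resp.message.tool_calls or []:
        if call.function.name == "get_weather":
            return get_weather(**call.function.arguments)
    return resp.message.content or ""
```

Whether this works at all depends on the specific model advertising tool support, so check the model card in the Ollama library before relying on it.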
Awesome video. One issue I am encountering: when I type "Hi", the response "Hello, how can I assist you today?" comes back as a HumanMessage. This gets fed back to the supervisor node, which treats it as a user message and calls the LLM tool again, starting an endless conversation where the agents keep talking to each other until I have to force-stop it. However, if I try "What is Python?", I get back a straightforward answer and the program ends normally. Any idea how to fix this? Is it something that can be fixed at the prompt level?
Yes, it can be fixed. Take a look at the pinned comment: you can enhance the system prompt or introduce a counter in the graph state to prevent runaway conversations among agents. If you need more help, jump onto the Discord server and DM me in the tutorial-help channel. Cheers!