@@Shri you don't really need agents for such tasks. Normal bots would do the same thing (static requirements and needs). Think of an agent which can generate PPTs for your project, searching the internet, compiling from various sources, and writing the slide content. That is what we should aim for when working with agents.
I really liked your video and the message, but I just kept thinking that you'd just done LangGraph 'properly'. It has much more capability, more transparency, more enterprise considerations, etc. I'm not sure I agree that you must never, under any circumstances, loop back. All that means is that you leave open the need to fail a process and, with additional context, get it resubmitted. Did I miss something?
What if the agent pattern is intended to drive up the number of necessary API calls, resulting in higher revenue for the LLM owner, or to demand more computational power from your local system? Who benefits from which approach? It's worth thinking about different approaches to getting efficient output from LLMs.
The only thing I took from this video is: don't build your project solely around agents; build a solid pipeline up to the level where you have proven systems running it.
Just checking: are you giving the agents their own vector database with the business information/logic they need? I'm looking into this using something like Pinecone, so each agent can interact specifically with its own information.
I don't agree 100%, but we're largely on the same side! In fact, the current frameworks are not enterprise-ready - an unpopular opinion you seem to share too! IMO, orchestration is the real key ;) but agentic systems with a nested state-flow pattern connected to external tools (Fabric, Composio, or similar) can be part of the solution. The other key is the RAG system; an uncertainty-first framework adds more complexity, but we think it is almost mandatory for companies. Great video! You should follow Microsoft Fabric and Semantic Kernel more closely.
Existing frameworks are bloated - you have a point there; it's like they only make sense to the people who created them. But I'm not sure what exactly Celery is adding here: extra complexity with no extra gain. Defining DAGs is fun, but it usually doesn't add much value.
I'm a coding noob, so I find CrewAI a lot easier to start with than your toolkit, but I'd like to change that. What do you recommend to get started on grasping this properly from the start?
From what I see, your key argument about data pipeline flows vs. agentic structures does not hold in dynamic systems. In linear, simple, pre-determined flows, maybe. In chaotic systems? I don't see how.
I've been using CrewAI to see if it was something I could use in a production setting. The future is definitely going to be more of an agentic workflow, with agents having the freedom to respond to requests (e.g., handling a variety of inputs, not just one), but I totally agree that the current state is far from that future state. Currently it is very creative and can do a lot, but it's a real struggle to get database connections working and then do some pandas DataFrame operations, and don't even get me started on the excess prompting and inconsistencies. Yeah, I know, shocking... but then again, I am looking at this from a business intelligence perspective. So for now the pipeline route is the preferred solution; this can quickly change when better frameworks come out.
This works, but letting AI take decisions is super cool, and new libraries will emerge - check out agency-swarm. Pipelines work for data-driven processes, but for creative processes agentic systems make sense.
Yeah, but you don't need frameworks for that; agents are just basic methods that call each other or tools. You also create some class for state management. Function calling is also easy. All those frameworks are opinionated and only add bloat. Unless you're making something with 20 agents, but that would be nonsense...
Agents are good for one-off activities, where you want the agent system to find a sequence of actions that gets the job done. Nice for non-coders or not-knowers. However, for a repetitive process, where you need to rely on the quality of the output, you need to control every step and KNOW that it will deliver a result you can handle in future steps. The issue with LLMs is the uncertainty they introduce, e.g. unwanted bias, wrong facts, broken reasoning. Use the LLM only where it shines (understanding and generating text) - you would not rely on the LLM to be good at math and would use other functions instead. The same principle applies to a lot of steps if you decompose the job into tasks. But you need to understand coding to do it properly (or use an AI to do it for you once and then have it write the task sequence with minimal LLM use). And this is not even considering the high costs agent systems produce compared to restricting LLM use to where it is beneficial - or the understandability of how the result was achieved...
Agents will be built into the models in the future. Just look at o1, and that's the first version of this approach. If and when AGI is achieved, all of this goes out the window.
Hey Dave, I'm working on my undergrad research project about AI agents, and I have a couple of questions. Is there a way to contact you?
Isn't this what LangChain is supporting? LangGraph was created exactly for this: to have a somewhat stateful and, to a certain extent, deterministic process flow with LLMs.
I have been building business processes for 20 years. This is the way!
I've been getting a lot of mixed comments on this video (I know the title is a little triggering, but hey, that's how YouTube works). I'm glad I'm not the only one that thinks this is the right approach.
No one said you had to set up an agent team as a dynamic, randomized flow... you can have agents pass variables down the line... the best part is, it doesn't have to fit neatly into a Pydantic model with error handling everywhere... if the previous agent just blobs back full text instead of a dict, the next agent can still get the info it needs and make it work.
Handling nulls, errors, variations, if/else logic, retries and structured input/output is what agents do away with.
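To make it concrete, here's a rough sketch of that kind of tolerant hand-off - the call_llm helper and the "topic" field are made up purely for illustration:

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder - swap in whatever LLM client you actually use."""
    raise NotImplementedError

def next_agent(previous_output: str) -> str:
    # Try to read the previous agent's output as structured data first...
    try:
        payload = json.loads(previous_output)
        topic = payload.get("topic", previous_output)
    except (json.JSONDecodeError, TypeError):
        # ...but if it blobbed back free text, just pass the whole thing along.
        topic = previous_output
    return call_llm(f"Write a short summary about: {topic}")
```

The next step still gets what it needs whether the hand-off was a dict or a wall of text.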
This is 100% the future of AI apps. I do think there should be a framework around this idea. Has anyone seen one?
You have a new sub, sir. I agree, this is the way - not swarms.
@@daveebbelaar too bad you don't know what you're talking about. The purpose of AI agents is NOT data pipelines or automation. Agent frameworks are precisely designed for work that is not predictable and can NOT be put into a DAG. If the idea can be put in a data pipeline, then don't use an agent framework at all; it's overcomplicated for no reason.
Use an AI agentic framework when dealing with complex, non-linear tasks with unpredictable workflows that require dynamic decision-making and flexibility. Use traditional pipelines when dealing with linear, predictable processes that demand consistency, precise control, and efficiency.
Can you please give a few examples?
Top!!🎉
Hi, we are thinking of building a platform for AI agents, so I would like to ask: do you think there are real use cases for AI agents? What technical challenges will there be if we create thousands of them?
I mostly agree with everything. But there are two kinds of pipelines. The first is when you have a finite number of transformations, and the second is when you don't know all the transformations in advance and need to delegate decision-making (in this case, you need an agentic approach). However, every pipeline can be represented as a finite set of transformations when you know it in advance - classification, for example - and that is the key. So, if your pipeline is research-like, then you can't know it in advance; in other cases, you can.
I like this distinction. I build UI-heavy pipelines with a lot of human input, where there's a ton of variability/decision-making in both the E and the T, and only the L is known in advance.
Good point. In those cases, agentic workflows make perfect sense.
@@daveebbelaar So you admit your title is extremely silly.
I completely agree with your viewpoint. The whole point of these frameworks is to get LLMs to perform abstract tasks that we have no predefined idea about. For needs other than that, there should be data pipelines, and that is obvious: if you don't need the computational power of LLMs, why use them?
This is a great distinction. I am wondering, though, whether we really have use cases for the second type at the moment. In other words, I have the impression that agentic approaches are mostly used for problems that don't need an agentic approach and can be solved much more simply.
While your critique of agent frameworks is spot-on and compelling, it seems there's a misconception about their potential. Your custom system resembles LangChain + LangGraph, highlighting a need for deeper understanding before dismissing existing frameworks.
I was thinking the same thing, or about the ability to do "flow engineering" in AutoGen. I do think it's a good idea to build your own, but it will also be a lot of work, so it's all a trade-off - and it's more or less the same thing by the end of the video.
There are some valid points in this video - agent frameworks are not a remedy for everything - but I don't agree with a lot of what's here:
1. The title is just click-baiting and misleading, and I really hate it. Nothing here shows that agent frameworks will fail, just that they are too heavy for simple tasks like the one demoed, and too complicated for some data scientists to fully adopt.
2. The whole video talks about why a DAG is a more straightforward approach, but it does not explain why agent frameworks and abstractions are bad. Abstraction always comes at the cost of design complexity, but also with benefits like decoupling, extensibility, etc.
3. It advocates building your own DAG framework, while there are already many DAG orchestration frameworks available, and I don't see the point of building such a thing yourself.
4. I doubt the author understands those agent frameworks well enough to call them failures. Agent frameworks aim at solving enterprise-level problems at scale, and they provide a lot of features to make a solution stable, maintainable, and scalable. Making it work as a PoC is only the first step towards a production-ready system. Discarding frameworks and trying to reinvent the wheel on all these pieces is not the right way to go.
The agent concept is misleading, and we should come up with a more advanced concept than the agent. If you stick with the agent concept, the workflow will get really complicated.
I completely agree with your point. I was just about to mention that there is a common misconception or a lack of information regarding the capabilities of agents. LangChain, combined with LangGraph, can indeed perform the same tasks effectively.
Could you specify what you think is the misconception about their potential?
Agent frameworks add inflexibility in a space that is very early in its development and still evolving. As a result, in my view, the potential must be really big to counter the disadvantages.
Or, in other words, what outcomes can the frameworks provide that I cannot create by building my own pipelines or sequences, or however you approach it?
Cyclical/recursive algorithms are needed for many problems, which, in part, is what agentic frameworks attempt to solve. Your sequential-processing-only paradigm is applicable only to certain problems.
Breaking down the problem is, however, a good point.
I don’t think there’s really any kind of problem that agent workflows can solve that this can’t.
I agree with your core message.
But I don't think you've used LangChain at the level it was designed for, or maybe you don't know about LangGraph?
It's not opinionated, and you can (and should) orchestrate the flow however you want: you can make it linear or acyclic (every LangGraph example), and you can decide the flow however you want - deterministically, defined by the LLM output, etc.
None of my agents are even driven by the default LangChain agents; I have my own prompts, output formats, tools, etc.
The framework is there to:
1. Standardize the way you interact with the models
2. Have a trackable, verifiable, analyzable way to build those graphs
Spot on - the most accurate comment!
Agreed. Take a simple Airtable: an input cell connected to an LLM and an output cell for its response. There you have your first step of an agent. The hours I lost learning LangChain, Flowise, you name it...
Dave, thank you for making this video. I can't tell you how many times I thought I was the problem when trying to get AutoGen and CrewAI to do anything beyond the most basic tutorial. The more I look at these frameworks, the more I realize how green this field is.
Everyone is trying to figure this out
Basically what this video is saying is this, "I do not understand the Agentic Framework Flow yet, so I will just critique it in the meantime because I do not understand it"
Kind of, LOL - it's early and maturing into a more intelligent, layered (much-needed) framework; agentic with RAG / Lang is still highly compelling.
I am curious: what actual results (e.g., in terms of performance) does the agentic workflow provide for you that you couldn't generate with a more flexible approach (e.g., a pipeline, as outlined in the video)?
Agents can work back and forth, basically discussing the output and improving it if needed. A pipeline is one-way.
For now, agentic no-code frameworks are seeing explosive growth (replacing traditional pipeline automations) because of their ease of use. I predict this will also be disrupted by LLMs soon, with their own fully automated agent builders that create 100% of the workflow automations - including seamless connections to sub-agents, tools, and other integrations as needed - starting from a simple questionnaire-style prompt.
I'm a newbie to all this, so I may be oversimplifying how this evolves, but it seems to me that AI (insert a leading LLM company here) realizes its full potential when it can automatically build its own agentic managers, capable of solving everything from very simple problems to very hard and niche problems in a fully automated way. These agentic managers will automatically build the workflow, add sub-agents, and help users easily integrate with their systems and tools, then easily deploy on their websites and apps to serve their customers, all from a few Q&A prompts. A perfect solution for SMBs that don't have an IT department.
Seems like he understands it, though. He argues that for most actual business applications, a pipeline model is easier both to implement and to reason about, as well as being more in line with what you actually want.
Interesting. I'm working on a CrewAI project at the moment, and I found I was using a DAG approach to tasks because of my experience with Kedro: one task, one transformation, one output, and keep working sequentially. In a nutshell, you're describing Kedro's approach and philosophy - it's just not fine-tuned for generative AI use cases yet. What I've found with multi-agent apps is that I end up building tools that do all the heavy lifting, and the agent is used to generate a piece of data (like a query string) used in subsequent processing. The challenge is building guardrails to prevent an agent from going off the rails when something doesn't work. If you give an agent access to a tool as simple as a search tool and it gets stuck, it could end up calling the tool in a loop, and there go your credits. So we're still having to treat agents like toddlers... Would be interesting to see your take on Kedro.
Hi! I believe you are Italian like me. I've also already built agents with CrewAI for content creation for IG and X. At the moment I'm building a lead-gen agent linked to another agent that then sends emails to those leads.
We can share knowledge - it's not so common to find people from our country working on CrewAI. If interested, reply to this comment 🙏
@@ricasco Hi, could you maybe tell me more about how exactly your lead gen agent works? Which sources does it use to find the leads? Sounds quite interesting :)
This guy definitely worked at McKinsey
Ciao Belli!!
I am working on a project that uses a combination of agents and pipelines. It has agents that each role-play a specific function of mind, with the agents working together to simulate a human mind.
They are divided up according to the ancient Yogic philosophy of mind: Ahamkara for ego, Manas for processing, Buddhi for decisions, and Chitta as the storehouse of memories.
Where can we see this work? Sounds fascinating
Cool af. A few days ago I was wondering whether someone out there was implementing something similar (although I was mainly thinking about the Western philosophical portrait of the human mind, i.e. mostly Freudian concepts like the id, ego, and superego, and/or even Jungian ones like the shadow, etc.).
Is there any update on the project? Any repo where we can learn more?
I strongly agree with you on using the ETL approach. When building a pipeline, each step or agent flow can be accessed in any order, giving the developer more flexibility and ease in assigning tasks to agents. Thanks for sharing, Dave.
LangGraph + function calling + LangSmith = production
"LangGraph is a way to create these state machines by specifying them as graphs."(c) LangChain
Many points you mentioned make a lot of sense. However, as stated in a previous comment, this approach can lead to numerous transformations in a scenario that might require multiple steps. In other words, you would always have to go through a three-step transformation. And not all tasks need three steps. The issue with fixing it this way is that, first, it can cause delays in response time, and second, you won't fully leverage the best aspect of artificial intelligence, which is its ability to, for example, assess the difficulty level of a question.
My proposal for improving the workflow is to use the first question asked, that is, the input for the first step when you mentioned 'transformation' or 'manager.' I would pass a prompt in this step. And in this prompt, as a response, it would have to classify the difficulty level of the question, with levels ranging from 1 to 5. I would create a logic where, if the difficulty level is between 1 and 3, or between 1 and 2, for example, there would be no need to go through all these steps.
Because there are many trivial questions that wouldn't require so many steps, which is how the human mind works when we ask someone a question. If the question demands more time for reasoning, the person takes time to think. But when, for instance, you ask someone how much 1 plus 1 is, they quickly respond that it's 2, without needing to go through three steps for trivial questions.
So, in the first prompt, I would include a difficulty rating mechanism. You would then establish a programming logic for each of these difficulty levels, allowing an agent with more resources than other agents to handle the reasoning, even based on previous contexts. And in this same step, using the response, you would receive both the difficulty rating of the question, which would be passed to the next step, that being the generation of the response. In this return, there could also be an analysis, based on what is being said, about the quality of the previous response given within the context, assigning it a level of assistiveness.
For example, if you ask someone to activate the email and they don't understand correctly, then, as previously mentioned, the response would receive a rating indicating that it wasn't a good response. This would be added to the agent's context, so when it generates a new response, it would take into account that the previous answer wasn't adequate. This way, with a single step, we would have feedback on the previous situation and the current situation to process.
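A minimal sketch of that first routing step (call_llm and the downstream prompts are placeholders, just to show the branching):

```python
def call_llm(prompt: str) -> str:
    """Placeholder - swap in your own LLM client."""
    raise NotImplementedError

def rate_difficulty(question: str) -> int:
    # First step: ask only for a difficulty rating from 1 to 5.
    raw = call_llm(
        "Rate the difficulty of answering this question on a scale of 1-5. "
        f"Reply with a single digit only.\n\nQuestion: {question}"
    )
    digits = [c for c in raw if c.isdigit()]
    return int(digits[0]) if digits else 3  # fall back to the middle if parsing fails

def answer(question: str) -> str:
    level = rate_difficulty(question)
    if level <= 2:
        # Trivial questions skip the multi-step pipeline entirely.
        return call_llm(f"Answer briefly: {question}")
    # Harder questions go through the full, more expensive path.
    plan = call_llm(f"Break this question into reasoning steps: {question}")
    return call_llm(f"Answer the question using this plan.\nPlan: {plan}\nQuestion: {question}")
```

The point is that the expensive path only runs when the rating says it is needed.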
Wow
Here's something you can help me understand, as an intermediate-level coder learning all of the nuances of AI/ML and their applications.
You're extolling the value of the directed acyclic graph approach towards data processing pipelines, to avoid sending data to earlier stages.
As a fan of idempotency and functional programming, I _think_ that I somewhat understand where you're coming from in your premise.
But in my studies of models, I'm also seeing a lot of buzz around the differentiation between methodologies of KANs vs MLPs.
My question is this: wouldn't there be some value in using information uncovered later in the pipeline to refine what you're doing earlier on?
For instance, let's say you're entertaining guests, and planning to serve appetizers. A very early step might be purchasing ingredients.
Later on, you realize that not all of the guests show up. If we're just going to keep moving forward, we make more appetizers than are needed.
The alternative: when fewer guests show up or RSVP, instead of making as many apps as your ingredients/plans dictate, you make fewer.
Now you have fewer appetizers, and you store or freeze the ingredients you didn't use. You _could_ make them and freeze the unused portions.
But by sending the information collected later back to an earlier step, you instead have the raw ingredients to use in other recipes instead.
This is a really lousy and forced metaphor, but it's all I could come up with off the top of my head. It just seems like there's value in the concept.
On a different level, isn't this just sort of a form of backpropagation? The ability to reinform earlier calculations with the results of later ones?
BS... clickbait title ...
Glad somebody finally brought that up. 90% of things that folks use agents for can be done with proper flow engineering. For all AI tasks (F1000 prod quality) I use DSPy which allows me to define the flows very nicely, similar to PyTorch. For larger, more complex systems I use Prefect for the workflow, but still DSPy for the individual AI calls. Agents have their place, but most of the time when you think hard about the problem, you don't need them.
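For anyone curious, the flow-engineering style looks roughly like this in DSPy. This is from memory, so treat the exact calls as illustrative; LM configuration is left out because it differs between DSPy versions:

```python
import dspy

# A signature declares the inputs/outputs of one step in the flow.
class TriageTicket(dspy.Signature):
    """Classify a support ticket and summarize it in one sentence."""
    ticket = dspy.InputField(desc="raw customer message")
    category = dspy.OutputField(desc="one of: billing, bug, feature_request")
    summary = dspy.OutputField(desc="one-sentence summary")

class DraftReply(dspy.Signature):
    """Draft a reply given the ticket and its triage result."""
    ticket = dspy.InputField()
    category = dspy.InputField()
    reply = dspy.OutputField(desc="polite, factual reply")

class SupportFlow(dspy.Module):
    def __init__(self):
        super().__init__()
        self.triage = dspy.Predict(TriageTicket)
        self.draft = dspy.ChainOfThought(DraftReply)

    def forward(self, ticket: str):
        # Two deterministic steps; the LLM fills in the fields, the flow stays fixed.
        t = self.triage(ticket=ticket)
        return self.draft(ticket=ticket, category=t.category)
```

No agent loop anywhere - just a defined flow with LLM calls inside it.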
You have a solid point about agentic frameworks usually not being the right tool for tangible business applications. It's about automating the repetitive.
I agree. This is the way. I found the same thing. I start from a blank slate and build up, without all the different frameworks that bloat the system.
I have been involved in enterprise software buildout/integration processes in the Fortune 500 (including automated ETL flows for financial reporting) and what you're saying here makes a ton of sense.
This is amazing! Where can I find the deep dive on this? I need it ASAP 🤞🏾
You are right - use a BPM workflow engine instead and call AI wherever needed.
Yep, it also follows OOP principles -- injecting data into an encapsulated object and getting an output. You could then have objects strung together, each doing their specific job. So a GPT is in a way an object that does a narrow thing and produces an output that could be injected into another GPT.
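As a bare-bones sketch of that object-chaining idea (fake_llm stands in for a real client):

```python
class LLMStep:
    """One encapsulated step: data in, transformed data out."""
    def __init__(self, name: str, prompt_template: str, llm):
        self.name = name
        self.prompt_template = prompt_template
        self.llm = llm  # any callable that takes a prompt string and returns text

    def run(self, data: str) -> str:
        return self.llm(self.prompt_template.format(data=data))

def run_pipeline(steps, data: str) -> str:
    # String the objects together; each output feeds the next step's input.
    for step in steps:
        data = step.run(data)
    return data

# Example wiring with a stand-in "model":
fake_llm = lambda prompt: f"[output of: {prompt[:40]}...]"
steps = [
    LLMStep("extract", "Extract key facts from: {data}", fake_llm),
    LLMStep("summarize", "Summarize these facts: {data}", fake_llm),
]
print(run_pipeline(steps, "long email text here"))
```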
My problem is having a hard time finding girlfriends.
You need a bigger pipeline son.
Build an agent to do your tinder for you
@@Mangini037 and longer, maybe.
Don't waste your time with them.
Trust me, we've already done something like this - Tinder 😂
I think we will see special models trained for the agent workflows. Right now, they are trained with way too much knowledge for this workflow. Then the latency will also go down. I'm currently wondering why we haven't heard anything about this approach yet.
You said this is a work in progress, and I'm wondering if you've compared the results of traditional Mixture of Agents responses with your pipeline approach for various common use cases.
Good take. All those frameworks are good for getting familiar with the principles but if you want to make a unique specialized product you need to code everything on your own. Probably you won’t need agents for some tasks even.
Thanks for sharing. There's a trade-off a developer needs to balance between reasoning agility and hallucination minimization, which determines how much one wants to constrain the dialog flow. Your case is naturally well suited to being solved by two steps - always the same, ETL-like pipeline. If you test your paradigm with a real chat interaction where a user wants to order at Starbucks, you will quickly get tired of framing it with the ETL-like paradigm. Indeed, there's an art to seeing apparently complex dialogues as more linear pipelines, but the overall feeling is that you're losing much of the flexibility that LLMs can provide.
We took exactly that approach in a current project, and it works great.
The discussion about the data pipeline is accurate, but it cannot be used to prove that a multi-agent system is ineffective or failing. You are still thinking like a software/data engineer instead of an AI engineer. Consider this: when developing any new data pipeline or system, do you ever need an LLM to help? If yes, then there must be a way to integrate the LLM directly as part of the pipeline or system too.
The concept of an agent is very naive due to the current limited capabilities of LLMs. I've seen too many solutions to current problems that are just stacks of LLMs with higher costs, higher latency, and suboptimal solutions.
When thinking about humans as agents, each agent should have specific capabilities (not just role-playing) and a specific internal workflow to deal with complex problems at light speed. However, current LLMs are slow, lack specific capabilities and workflows, and are less reliable.
AutoGen allows groups of agents to have a specific order of execution, so you can have them interacting like in a DAG workflow
Great video. I am not big on YouTube, but this is the first time I've seen someone really understand the current state of the tech.
I totally agree with your premise, good video 🎉
Everything is great. I have built a few tools myself with Instructor. To really automate business processes, however, I see the problem with data protection. In the EU, I can't just put a complete e-mail into an LLM. How do you solve this? It would be great if you could shed more light on the subject of data protection! Thank you very much for your excellent content!
Exactly what I built for my company: a small AI that detects PII in text that I want to give our offices for use on their intranet, before anything is sent out to public LLMs. Beforehand I tried Presidio and Octopii. Both use regexes; I got bad results.
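The gate in front of the public LLM is basically one extra pipeline step. Very rough sketch - the regex is only a baseline, and detect_pii is where the local model would go:

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def detect_pii(text: str) -> list[str]:
    """Placeholder for a local NER/LLM-based PII detector; here only a regex baseline for emails."""
    return EMAIL_RE.findall(text)

def redact(text: str) -> str:
    # Replace every detected span before the text ever leaves the intranet.
    for span in detect_pii(text):
        text = text.replace(span, "[REDACTED]")
    return text

def safe_llm_call(text: str, llm) -> str:
    # Only the redacted version is sent to the public LLM (llm is any callable).
    return llm(redact(text))
```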
Bro, you are a genius! Data Pipeline??! wow, eureka!
Great video! I would love to see the source code of the project that you have open, and a walkthrough of how and why you put it together.
I have been in a dilemma about when and where to use existing agent-based frameworks vs. an ETL-style workflow. I have been given similar work for my thesis: developing a multi-agent RAG system for cross-domain information extraction and retrieval. Could you provide guidance on the following:
Agent Specialization: How can I design specialized agents for different Confluence spaces and internal services (e.g., sick leave, vacation request, IT ticket)?
Coordination: What strategies can I employ for a coordinator agent to manage cross-domain information retrieval effectively?
Domain Adaptation: How can I implement transfer learning techniques to adapt agents to new domains?
Framework Flexibility: What considerations should I keep in mind to create a flexible framework that can accommodate new spaces and internal functions?
Additional Questions:
Are there any existing frameworks or tools that could be adapted for this purpose?
What are some potential challenges and best practices to consider during development and implementation?
I'm eager to explore different approaches and learn more about this topic.
Please suggest!
Super interesting. I have come to the same conclusion about most "agentic" frameworks: the ReAct approach is too inconsistent for production applications.
Have you tried LangGraph? It goes in a very similar direction to your data pipeline approach.
And together with function calling and structured output it allows you to build super powerful apps.
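A linear graph in LangGraph looks roughly like this (sketch from memory; check the current docs for the exact API):

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict):
    email: str
    intent: str
    reply: str

def classify(state: State) -> dict:
    # Call your LLM here; hard-coded for the sketch.
    return {"intent": "invoice_question"}

def respond(state: State) -> dict:
    return {"reply": f"Drafted reply for intent: {state['intent']}"}

graph = StateGraph(State)
graph.add_node("classify", classify)
graph.add_node("respond", respond)
graph.set_entry_point("classify")
graph.add_edge("classify", "respond")
graph.add_edge("respond", END)

app = graph.compile()
print(app.invoke({"email": "Where is my invoice?", "intent": "", "reply": ""}))
```

So you get the same deterministic, pipeline-like flow, plus state tracking for free.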
I get where you are coming from, but...
1) CrewAI has sequential processing. It works pretty similarly to traditional pipelines.
2) There are a number of use cases where criss-crossing agents is necessary (I am thinking of validation tasks).
Finally, traditional pipelines are nice, but I always find myself solving problems that others have already solved...
so it's either copy-paste into my custom pipeline or embrace the open source. I find a lightweight framework like CrewAI very useful for many of today's tasks (in production).
With the ability to easily include and write custom tools, it's like having a traditional pipeline on steroids.
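To illustrate point 1, a sequential crew is already pretty close to a plain pipeline. Rough sketch only - CrewAI's API changes quickly, so treat the names and arguments as approximate:

```python
from crewai import Agent, Task, Crew, Process

researcher = Agent(
    role="Researcher",
    goal="Collect the key facts for a topic",
    backstory="Careful analyst who only reports verifiable facts.",
)
writer = Agent(
    role="Writer",
    goal="Turn research notes into a short post",
    backstory="Concise technical writer.",
)

research = Task(
    description="Research the topic: agent frameworks vs pipelines",
    expected_output="A bullet list of facts",
    agent=researcher,
)
write = Task(
    description="Write a three-paragraph post from the research notes",
    expected_output="A short post",
    agent=writer,
)

# Process.sequential runs the tasks in order, passing each output to the next task.
crew = Crew(agents=[researcher, writer], tasks=[research, write], process=Process.sequential)
result = crew.kickoff()
```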
Hey, this looks very reasonable. Have you looked at Prefect and its new ControlFlow library? It helps manage this data pipeline pattern for LLMs.
Can you help me understand the difference between building agentic applications with LangGraph and the approach you proposed in this video?
You did mention the LangChain style of making agents, but LC completely revamped their agentic application-building framework with LangGraph, where one can get full control of the behaviour of the agentic workflow using principles from DAGs.
Very nice and detailed video, but what you explained is exactly the same as LangGraph. Rather than writing it from the ground up, it's better to use LangGraph to determine the flow between intermediate steps.
This was so wrong
Step 1) understand how the tech works
Step 2) use it to its best ability
Hasn't this always been the case with new technology for decades?
Awesome content, keep up the good work!
A framework, in the end, is just how you organize your code.
Pipelines are good for linear tasks: step A, then step B, then step C. Using a team of agents is, IMHO, meant for nonlinear tasks, where you might need step A, then step C, and then step B, or even adding or removing steps from the - here it comes - DYNAMICALLY formed pipeline, based on the decisions of a (managing-role) agent.
Using the wrong tool for the wrong task is always an easy way to critique something or someone.
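Roughly what I mean by a dynamically formed pipeline - a manager step picks the next transformation from a fixed registry, so the flow is dynamic but every step is still ordinary code (call_llm is a placeholder):

```python
def call_llm(prompt: str) -> str:
    """Placeholder - swap in your own LLM client."""
    raise NotImplementedError

STEPS = {
    "extract": lambda data: call_llm(f"Extract entities from: {data}"),
    "classify": lambda data: call_llm(f"Classify this text: {data}"),
    "summarize": lambda data: call_llm(f"Summarize: {data}"),
}

def run_dynamic(data: str, max_steps: int = 5) -> str:
    for _ in range(max_steps):
        choice = call_llm(
            f"Given this data, pick the next step from {list(STEPS)} or say DONE.\nData: {data}"
        ).strip().lower()
        if choice not in STEPS:
            break  # manager said DONE (or something unexpected), so stop
        data = STEPS[choice](data)
    return data
```

The important part is that the step registry is closed: the manager can reorder steps, but it can't invent new ones.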
Where would you ever use such a system except for chatbots? Business is deterministic, and everything in this world is a business process. 99.9% of the time you wouldn't need dynamically formed pipelines at all.
How would this work if we used a BPM engine instead?
I found that critical agent feedback is exactly what you need to *constrain* the output. It should shut down all the hallucinated, malformed, and simply incorrect outputs. Also, tool use is better in agentic architectures: you can have dedicated agents to format tool calls and process their output before it's fed back.
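A minimal version of that critic loop, with a placeholder call_llm just to show the shape:

```python
def call_llm(prompt: str) -> str:
    """Placeholder - swap in your own LLM client."""
    raise NotImplementedError

def generate_with_critic(task: str, max_rounds: int = 3) -> str:
    draft = call_llm(f"Complete this task: {task}")
    for _ in range(max_rounds):
        verdict = call_llm(
            "You are a strict reviewer. Reply APPROVED if the draft is correct and "
            f"well-formed, otherwise list the problems.\nTask: {task}\nDraft: {draft}"
        )
        if verdict.strip().upper().startswith("APPROVED"):
            return draft
        # Feed the critique back in and try again.
        draft = call_llm(f"Task: {task}\nPrevious draft: {draft}\nFix these problems: {verdict}")
    return draft  # give up after max_rounds and return the best attempt
```

Capping the rounds keeps the feedback loop from burning credits the way an unconstrained agent can.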
Thank you, very informative. Which pipeline registry tool do you use?
I followed through. Your system looks great, with content creation handled by your generative AI pipeline. However, I think the point of agentic systems - which isn't there yet, for sure - is to be able to work in a non-rigid, bottom-up way where all systems communicate with each other. So it is a build-up on pipelines like the ones you have (which are amazing, by the way 😊), putting them all together in a system that works autonomously. The idea is to get AI to the point where it will work as a team without instruction; that is why Sam and all the others are building these huge systems now. One thing I want to get my head around is non-generative AI, which is content-based. One thing I am seriously delving into now is API endpoints of all kinds with LLM support. For some tasks they are not required, but for many where data is involved, they are. Hope this makes sense. Not here to put a dent in your wonderful work - you are great at coding and putting the AI infrastructure together. Look forward to following along with you. AI agent workflows are the way forward now.
I have reached the same conclusion, but for other reasons. In fact, I dare to say that these frameworks have been slowing down actual innovation. They are very helpful when you are starting and experimenting, but when it comes to going to production they will give you a hard time. Managing token count, tool calling, structured output, entry/exit rules, logging and all the "boring" stuff production needs will be very messy and will force you to rewrite the native modules. My take on data pipelines: it will depend on the task, but I have two approaches: an auto-managed agent, setting agents up as tools and working with tool calling; or state machines, to control steps and transitions (what you built from scratch).
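For the state-machine variant mentioned above, a minimal sketch - with invented states, a stubbed extraction step, and no framework - could look like this:

```python
# Minimal state machine for an LLM workflow: each state has a handler
# that returns the next state, so transitions stay explicit and testable.

from enum import Enum, auto

class State(Enum):
    EXTRACT = auto()
    VALIDATE = auto()
    RETRY = auto()
    DONE = auto()
    FAILED = auto()

def handle_extract(ctx: dict) -> State:
    ctx["draft"] = f"extracted({ctx['input']})"   # stand-in for an LLM call
    return State.VALIDATE

def handle_validate(ctx: dict) -> State:
    if "extracted" in ctx["draft"]:
        return State.DONE
    ctx["retries"] = ctx.get("retries", 0) + 1
    return State.RETRY if ctx["retries"] < 3 else State.FAILED

def handle_retry(ctx: dict) -> State:
    return State.EXTRACT

HANDLERS = {
    State.EXTRACT: handle_extract,
    State.VALIDATE: handle_validate,
    State.RETRY: handle_retry,
}

def run(ctx: dict) -> State:
    state = State.EXTRACT
    while state not in (State.DONE, State.FAILED):
        state = HANDLERS[state](ctx)
    return state

print(run({"input": "raw email body"}))
```

Entry/exit rules, retries and logging all get an obvious home in the handlers, which is exactly the "boring" production plumbing the frameworks tend to hide.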
Super helpful, ty!
Would like to know more about the design pattern you have used here, and why.
You have really pinpointed the current issues with agents - there's often a huge gap between aspirations and reality. There are so many agent frameworks out there, but the key to consistent performance still lies in having an effective and stable workflow to guide it, which should essentially be a data flow. Also, could you share the code from the video?
So what you're trying to say is it's simple to build agentic workflows, hence do it yourself and don't use the existing frameworks?
If yes, your video/take would have been much better and more persuasive if you had dug into the implementation of the said frameworks while pointing out the cons. Your example is very basic and doesn't even need an agentic process tbh. You can write a script to handle this like you did.
All I am asking for is a more in-depth comparison, not a one-sided take.
Agree on the idea not to use frameworks and to build custom; however, the data pipeline / sequential DAG-based approach will not achieve the fluidity that gen AI promises.
One very good use case for agents is the ability to decide which of the tools at their disposal to use. A very lightweight, less bloated framework which can do this (basically function calling, but with consciousness) will win the race. I am thinking a design pattern, instead of a framework, will work.
This is coming from my experience of putting CrewAI in prod and seeing it fail miserably at times.
But why would you even need that ability, except when building chatbots? Your business processes are not dynamic. They are pretty static. What is dynamic is the data being fed into the process. If your business process is itself dynamic, then your problem is not AI but your business process itself.
@@Shri you don't really need agents for such tasks. Normal bots would do the same thing (static requirements and needs). Think of an agent which can generate PPTs for your project, searching the internet, compiling from various sources and writing the PPT content. This is what we should aim for when working with agents.
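For what letting the model decide which tool to use looks like mechanically, here is a hedged sketch assuming the OpenAI Python SDK (v1.x); the tool definitions, model name, and prompt are all made up for illustration:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Two illustrative tools the model can choose between.
tools = [
    {
        "type": "function",
        "function": {
            "name": "search_web",
            "description": "Search the internet for up-to-date information.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "write_slide",
            "description": "Write one presentation slide from given bullet points.",
            "parameters": {
                "type": "object",
                "properties": {"bullets": {"type": "array", "items": {"type": "string"}}},
                "required": ["bullets"],
            },
        },
    },
]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Find recent stats on EV adoption for my deck."}],
    tools=tools,
)

# The model may decide to call a tool; your own code then executes it.
message = response.choices[0].message
if message.tool_calls:
    call = message.tool_calls[0]
    print(call.function.name, call.function.arguments)
else:
    print(message.content)
```

Whether you wrap this in CrewAI, a design pattern of your own, or a plain loop is exactly the trade-off being debated in this thread.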
Wish I had seen this before beating my head against Dify, Flowise, n8n... 100% spot on
I really liked your video, and the message, but I just kept thinking that you'd just done LangGraph 'properly'. It has much more capability, more transparency, more enterprise considerations, etc. I'm not sure I agree that you must never, under any circumstances, loop back. All that means is that you leave open the need to fail a process and, with additional context, get it resubmitted. Did I miss something?
@13:33 Please create an in-depth video on these concepts using the example that you are showing in this video !
Yeah, that’s true…!
(that’s why I’m working on an open-source execution environment for agents)
What if the agent pattern is intended to drive up the number of necessary API calls, resulting in higher revenue for the LLM owner or demanding higher computational power from your local system?
Who benefits from which approach?
It's worth thinking about the different approaches to getting efficient output from LLMs.
Hey, could you please share the video about the instructor library that you talked about in this video?
The only thing I understood from this video is: don't build your project solely depending on agents; build a solid pipeline up to the level where you have proven systems running it.
What about cyclic workflows? For example, a crawler which is trying to find a login page for a website?
It’s simpler for you because of your ability, but for 95% of people, having a framework is a better option
No it's not; basic Python is enough for any kind of agentic flow, and these frameworks only complicate things.
I would LOVE to see your design pattern in depth
Just checking, are you giving the agents their own vector database with the business information/logic needed? I'm looking into this using something like Pinecone, so each agent can specifically interact with its own information.
Not 100% in agreement, but you are largely, clearly on our side!
In fact, the current frameworks are not enterprise-ready - an unpopular opinion you seem to share too!
IMO orchestration is the real key ;)
But agentic systems with a nested state-flow pattern, connected to external tools/Fabric/Composio/whatever else, can be a part of the solution.
The other key is the RAG system - an uncertainty-first framework that adds more complexity, but we think it is almost mandatory for companies.
Great video!
You should follow Microsoft Fabric and Semantic Kernel more closely.
Think about an AI assistant that replaces chatbots in the sales funnel. IMHO the best use case is to cover it with RAG and an agentic approach.
Thanks for the video. What kind of whiteboard tool do you use?
It's Figma.
Existing frameworks are bloated, you have a point there; it's like they only make sense to the people who created them. But I'm not sure what exactly Celery is adding here - extra complexity with no extra gain. Defining DAGs is fun, but usually doesn't add much value.
Great work. Please publish the next tutorial. Is there a GitHub repo for the code?
I'm a noob with coding, so I find CrewAI a lot easier to create things with from scratch than your toolkit, but I'd like to change that. What do you recommend to get started on grasping it properly from the start?
state machines, state machines, state machines...
Does OpenAI's structured output release change your opinion?
Very interesting. I was applying this to the same kind of tasks in my company too, like email reading, attached file processing, etc. Is the code available? Thanks a lot.
From what I see, your key argument about data pipeline flows vs agentic structures does not work in dynamical systems. In linear, simple and pre-determined flows maybe. In chaotic systems? I don't see how.
Is there a way for the user to know, in an agent framework, which LLM is solving their task, so that they get consistent and repeatable results?
I've been using CrewAI to see if it was something I could use in a production setting. The future is definitely going to be more of an agentic workflow, with agents having the freedom to respond to requests (e.g. handling a variety of inputs, not just one), but I totally agree that the current state is far from the future state. Currently it is very creative and it can do a lot, but it's a real struggle to get database connections working and then do some pandas DataFrame actions, and don't even get me started on the excess prompting and inconsistencies. Yeah, I know, shocking.. but then again I am looking at this from a business intelligence perspective. So for now the pipeline route is the preferred solution.. this, however, can quickly change when better frameworks come out.
Good old Programming: Imperative Programming
"AI" programming: Declarative programming
Have you seen Agent-Zero? Very much a clearer way to do some tasks.
This works, but giving AI the ability to take decisions is super cool; new libraries will emerge - check out agency-swarm.
This works for data-driven processes, but for creative processes agentic systems make sense.
Yeah, but you don't need frameworks for that; agents are just basic methods that call each other or tools. You also create some class for state management. Function calling is also easy. All those frameworks are opinionated and only add bloat.
Unless you're making something with 20 agents, but that would be nonsense...
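In that spirit, a bare-bones sketch of "agents as plain methods plus a state class", with no framework and with the LLM/retrieval calls stubbed out, could be as small as this (all names are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class RunState:
    """Simple shared state passed between 'agents'."""
    question: str
    notes: list[str] = field(default_factory=list)
    answer: str | None = None

class Researcher:
    def run(self, state: RunState) -> RunState:
        # Stand-in for an LLM or retrieval call.
        state.notes.append(f"found 3 sources about: {state.question}")
        return state

class Writer:
    def run(self, state: RunState) -> RunState:
        # Stand-in for an LLM call that drafts the final answer.
        state.answer = "Summary based on: " + "; ".join(state.notes)
        return state

state = RunState(question="Why do agent frameworks add overhead?")
for agent in (Researcher(), Writer()):
    state = agent.run(state)
print(state.answer)
```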
Agents are good for one-off activities, where you want the agent system to find a sequence of activities that gets the job done. Nice for non-coders or not-knowers.
However, for a repetitive process, where you need to rely on the quality of the output, you need to control every step and KNOW that it will deliver a result you can handle in future steps. The issue with LLMs is the uncertainty they introduce, e.g. unwanted bias, wrong facts, broken reasoning.
Use the LLM only where it shines (understanding and generating text) - you would not rely on the LLM to be good at math and would use other functions instead. The same principle applies to a lot of steps if you decompose the job into tasks. But you need to understand coding to properly do it (or use an AI to do it for you once and then write the task sequence for you with minimal LLM use).
And this is not even considering the high costs agent systems incur compared to restricting LLM use to where it is beneficial - or the understandability of how the result was achieved…
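To make the "use the LLM only where it shines" point concrete, here is a tiny sketch of a decomposed job where parsing and arithmetic stay in plain code and only the final summary step would call an LLM (stubbed here):

```python
# Decomposed job: deterministic steps for parsing and math,
# an LLM only for the final natural-language summary.

def parse_amounts(report: str) -> list[float]:
    # Plain parsing, no LLM involved.
    return [float(tok) for tok in report.split() if tok.replace(".", "", 1).isdigit()]

def total(amounts: list[float]) -> float:
    # Arithmetic stays in code, where it is exact.
    return sum(amounts)

def call_llm(prompt: str) -> str:
    # Placeholder for the single text-generation step.
    return f"(LLM summary of: {prompt})"

def run(report: str) -> str:
    amounts = parse_amounts(report)
    return call_llm(f"Write one sentence noting the total spend of {total(amounts):.2f} EUR.")

print(run("Hosting 12.50 and licences 40 plus travel 99.90"))
```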
Great contribution
How do you build production-level AI applications??
Is function calling alone enough??
Agents will be built into the models in the future. Just look at o1, and that's the first version of this approach. If and when AGI is achieved, all of this goes out the window.
Hey Dave, I'm working on my undergrad research project, and it's about AI agents. I have a couple of questions; is there a way you could share how to contact you?
Can you share the project shown in the video?
Excellent !
Hey, can you share the template if possible?
very well said!
Isn’t this what LangChain is supporting? LangGraph was created exactly for this, to have a somewhat stateful and, to a certain extent, deterministic process flow with LLMs.
I can’t agree with this more strongly
One question: how do you manage memory within the data pipeline system?
Langgraph
This is such a good video - I will dive deeper into this at a later point to learn about your findings
Have you looked at VRSEN’s Agent-Swarm? Sounds like he avoids many of the pitfalls you describe here…