when the student is ready, the master appears! Thankyou.
"Not only the thirsty seek the water, the water as well seeks the thirsty." -Rumi
This is such an amazing service to the community. You deserve to be recognized. Thank you.
I'm crying really, I was just lost in this project, but you made it so tangible and understandable. Thank you. Thank you from the bottom of my heart.
This is truly the best agentic explanation across the internet without using frameworks. Many thanks!
finally a clear, structured explanation on how agents work behind the hood. great respect man!
This is definitely the best explanation of AI agents ever. I'm a subscriber from now onwards.
What this lesson also shows are
1. You can create agents without frameworks if you understand what they are and what they're supposed to do.
2. Agent creation isn't all that hard. It may involve writing more lines of code but everything you write will be under your control.
I'm curious as to how you can make two agents talk to each other. If we can do this, then I'm ditching all frameworks.
This was the best AI video I've seen in months. Very well explained, great examples, and a solid explanation of the subject! Just amazing!
You make the best llm dev content on youtube - always a clear vision and clean code. I'd be interested to see your take on running llm code, say giving an ai a python sandbox to write code to, or even make new tools in the format from this video.
Thank you! I'll see what I can do on this.
I definitely want to see this
One of the most structured and thoroughly explained code-alongs. Thank you!
Awesome work! From scratch helps us understand. As Feynman said: "What I cannot create, I do not understand."
Just used your Github repo & got this to work. Thanks for the detailed lessons.
That was truly incredible. I have never been so motivated before. Thank you so much for this
THANK YOU! I've been looking for one like you that knows the tech well enough to explain it! Thank you!
Building AI agents from scratch involves designing algorithms that can learn and make decisions. Start with defining the problem, select appropriate machine learning models, and train them using relevant data. Simplify the process by breaking it into manageable steps and using frameworks that streamline development.
Awesome info on custom agents. Can you create a video that dives into building tools/functions? I watch a TON of your videos and appreciate the work and info you put in! You have skills and knowledge, my friend...
This perfectly demonstrates how exactly it works underneath. Thanks!
Keep it up week by week, bro! It was a very good video!
Works fine !! 100 percent !
Thank you for this well-explained video.
Which would you recommend for creating a sophisticated, scalable, production-ready agent system (> 30 nodes) with cycling and branching capabilities: Haystack, LangGraph, or a custom framework built from scratch, and why? Also, could you make a video with tips for building AI agents in production? Last question: can LangGraph and LlamaIndex be combined, and does it make sense to do so?
Brilliant Question!
You got any answer?
Excellent video. Thanks for the detail and especially your reasoning, I really appreciate that.
Awesome newsletter automation tips! Alternative tools like SmythOS offer advanced AI models to streamline your content strategy. #AIContent #Automation
Thanks man, we'd like you to dedicate a future video to a self-improving agent with memory + deployment 🎉🙏
I'll be creating a more sophisticated agent with a vector store for long-term memory.
@@Data-Centric looking forward!
fantastic walk through & video production. Thank you.
This is a great example! Thank you. The only thing I'd miss is the agent's ability to reason over the tool's response after it returns a calculation or string reversal.
Great, thanks a lot!! I'd also love to see how agents loop through problems, iterate on possible solutions, and reflect until they decide the current solution is good, and only then return it.
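A reflection loop like that can be sketched in a few lines. `propose` and `critique` below stand in for two LLM calls and are purely illustrative, not from the video:

```python
def solve_with_reflection(task, propose, critique, max_iters=5):
    """Iterate: propose a solution, critique it, return once it's deemed good.

    propose(task, feedback) -> solution
    critique(solution)      -> (ok, feedback)
    """
    feedback = None
    solution = None
    for _ in range(max_iters):
        solution = propose(task, feedback)
        ok, feedback = critique(solution)
        if ok:
            return solution
    return solution  # best effort after max_iters
```

In a real agent, `propose` would prompt the model with the task plus the previous critique, and `critique` would be a second prompt (or the same model with a reviewer persona) that judges the draft.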
Great example without using existing frameworks
Awesome, thanks. I hope you can do a part 2 where we see agents working together in a flow or process, so we can see their value over normal code.
Hi @Data Centric. Do you plan to add a video on how you can make two (or more) agents talk to each other to fulfill a task from scratch? :)
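Not the author, but a minimal sketch of two agents talking is just two LLM-backed functions feeding each other's replies in a loop. Everything here is illustrative; `llm` stands in for any chat-completion call:

```python
def make_agent(system_prompt, llm):
    """Wrap an LLM call with a persona and a running message history.

    `llm(system_prompt, history)` stands in for any chat-completion call.
    """
    history = []

    def step(incoming):
        history.append(("user", incoming))
        reply = llm(system_prompt, history)
        history.append(("assistant", reply))
        return reply

    return step


def converse(agent_a, agent_b, opener, turns=3):
    """Alternate messages between two agents; each reply feeds the other."""
    transcript, message = [], opener
    for _ in range(turns):
        message = agent_a(message)
        transcript.append(("A", message))
        message = agent_b(message)
        transcript.append(("B", message))
    return transcript
```

With real models you'd give each agent a distinct system prompt (e.g. writer vs. critic) and add a stopping condition instead of a fixed turn count.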
I'm learning how to develop AI Agents from you.. new subscriber 🎉
I love the detail in this video and will be checking out your other ones! I suggest upgrading your microphone as this is currently the only thing taking away from the viewing experience
You have the teacher talent. Thank you!
Hey man, excellence in all its meaning, thanks for your work and knowledge and time!
I was waiting for this, really cool and thanks for making this < 3
I'd like to see the following: given one or more research agents with scraping or search capabilities (it could be any task, really), a manager agent analyzes each research agent's response and either (1) accepts it, (2) decides it needs further refinement, or (3) finds that it raises a new question. In cases 2 and 3, the manager sends the research agent out again with a new task.
Another great video! Awesome work!
Thank you so much for creating this video. 😁
the accent and articulation are awesome
Excellent work, I'm following this project. I found that with 7B models it's hard to make it work properly; I'll work on optimizing it, looking into tool-calling optimization.
You, sir, are a legend!
Great explanation. Thank you very much!
Can you list examples of tools and more powerful tools we can give to our agents?
Tools can be anything you want as long as you can program them as functions. Web Search, Data Visualization, Scheduling Events etc.
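A minimal sketch of what that looks like in practice (the tool names here are just examples, not the ones from the video):

```python
def calculator(expression: str) -> str:
    """Evaluate a basic arithmetic expression and return the result as text."""
    # eval with empty builtins is fine for a demo; use a real parser in production.
    return str(eval(expression, {"__builtins__": {}}, {}))


def reverse_string(text: str) -> str:
    """Reverse the characters of the input string."""
    return text[::-1]


# The agent's tool registry: name -> function. The LLM only ever sees the
# names and docstrings in its prompt; the agent code executes the choice.
TOOLS = {fn.__name__: fn for fn in (calculator, reverse_string)}
```

Web search, data visualization, or event scheduling would follow the same pattern: a plain function with a descriptive docstring, registered under its name.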
Very well done. Great delivery, excellent content on AI Agents. Look forward to learning more from you. For example, how do you think AI enterprise workflows might be developed through combination of the AI Agent approach you have outlined integrated with BPM (biz process mgmt) tools and RPA?
Thank you! Regarding your suggestion, this is quite difficult to do without sounding general and unhelpful. For obvious reasons, one can't share confidential client work on YouTube, so the information becomes more like a consulting deck you could easily pick up from McKinsey or Gartner for free.
I'm not suggesting that McKinsey or Gartner don't do great work, btw!
This is basically what I've been doing; interesting that we've come to similar solutions. I never thought of specifying a JSON format for the response; I had been converting it afterwards. Good stuff.
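For anyone curious, requesting JSON back is mostly a system prompt plus defensive parsing. A rough sketch (the prompt wording and fallback behaviour are my own choices, not from the video):

```python
import json

# Illustrative system prompt; wording is my own, not from the video.
SYSTEM_PROMPT = (
    "You are an agent that selects a tool. Respond ONLY with JSON in the "
    'form {"tool": "<name>", "input": "<argument>"}.'
)


def parse_tool_call(raw):
    """Parse the model's reply; fall back to a no-op if it isn't valid JSON."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return {"tool": None, "input": raw}
```

Converting afterwards works too, but asking for JSON up front means one `json.loads` instead of ad-hoc string surgery, and the failure mode is explicit.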
Good explanation. Thanks a million
Do you have any personal recommendations for looking for particular fine-tuned models on huggingface?
I get some of the lingo, like what "Hermes" means, etc., but I'm curious whether there are particular datasets to look for, or whether you've noticed any noticeable difference.
For example, between Unsloth- and DPO-tuned models: any inkling whether anything like that might be more effective in general for finding a model that performs really well as an agent?
I know this is a random question and kinda broad on the face of it. So to clarify my intentions: I'm really just trying to spark any novel thinking or observations that might prove valuable.
Either way, thanks for your time man! These videos are awesome
Thanks for watching. I'm currently combing through the latest research papers on agents and haven't covered fine-tuning for agentic workflows yet. However, if anything interesting appears in the literature, I'll do a video on it!
All these videos are great, but it never works on my machine. Can you do a video showing a full, beginner-friendly installation guide: IDE, Python environment, etc.?
Have you tried to do it with the help of ai? 😊
I appreciate your efforts in trying to get everything up and running. Have you had a chance to go through the instructions in the README for the GitHub projects? Python development often requires persistence with debugging, and there are plenty of resources available online to help with these challenges. I mention this not to dismiss your request, but because my time is limited, and I focus on AI and its applications on this channel. Unfortunately, I cannot dedicate videos to beginner tutorials on setting up your environment, IDE, etc.
Thank you for your understanding!
I run my Ollama stuff in a container, and I have an IP like 192.168.1.1:8087 for it. How would I approach changing a local Ollama model to use this port instead?
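For anyone else stuck on this: Ollama serves its HTTP API at whatever host/port you expose, so it's usually just a matter of changing the base URL the request is posted to. A minimal sketch, assuming the address and model name below (substitute your own):

```python
import requests

# Assumed container address and model name; substitute your own.
OLLAMA_BASE = "http://192.168.1.1:8087"


def generate(prompt, model="llama3"):
    """Send a non-streaming generate request to a remote Ollama instance."""
    response = requests.post(
        f"{OLLAMA_BASE}/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]
```

If the code hardcodes `localhost:11434`, replacing that string (or reading it from an environment variable) should be all that's needed.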
I know you used very basic example to show proof of concept. But what are some practical things that one could do with these agents/tools?
I'm a bit lost when it comes to understanding where some of these actions happen. How does the LLM read the description? Where does it transform the query to use the tool? Does it use the tool's response in a prompt or not?
This guy is awesome.
Thank you sir
Thank you for this
Thanks 💯
Bro u r amazing 👏👏👏
I have one question: what will the behaviour be if the calculator tool is given some garbage name but has a proper docstring?
Depending on how competent your model is, it will understand what the tool does or not.
I ran an experiment where the LLM (gpt-4o) was able to use a tool in a case I hadn't anticipated in my docstring or the tool/function name, because it identified that a script I had inside could help it respond to the query.
It did not work with smaller models.
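To make the mechanics concrete for the question above: the model never reads a docstring directly; it only sees whatever text the agent pastes into its prompt. A minimal sketch of how docstrings typically end up there (function names are illustrative):

```python
import inspect


def calculator(expression: str) -> str:
    """Evaluate a basic arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}, {}))


def tool_descriptions(tools):
    """Render each tool's name, signature, and docstring as prompt text.

    This is the sense in which the LLM 'reads' a docstring: the agent
    pastes this string into the system prompt before every call.
    """
    lines = []
    for fn in tools:
        lines.append(f"- {fn.__name__}{inspect.signature(fn)}: {inspect.getdoc(fn)}")
    return "\n".join(lines)
```

So a garbage name with a good docstring can still work, because both strings reach the model; a capable model leans on the description, while smaller models tend to depend more on an informative name.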
Use langchain bro. So much better than rawdogging with pure Python
There is a minor typo in ollama_model.py on line 38. It should be '"format":"json",' instead of '"format" "json"' (a colon and a comma are missing). Additionally, I had to uncomment line 56 and return response_dict instead of just response, as it was a string.
Great spot, thank you. I've updated the code with the fix now!
Similar to function calling
This is from numerous imports, rather than from scratch.
Great
This is great John, thanks for this video. I would love if you could elaborate more on this agent architecture vs the one you built in ua-cam.com/video/CV1YgIWepoI/v-deo.html - when would you use which architecture and why?
6:39 ???
why is it necessary to install anaconda?
I'll answer my own question - no it is not necessary. It works great. thanks for the fantastic video and for providing us the code.
AI agents don't really work that well yet.
It's really the LLM end that's non-deterministic; getting what we want out of it gets tricky.
This video is AI generated
Hey, what is the self.model_endpoint here? I'm not getting it. Can you help me find it, or show how I can find it for other platforms like Together AI and Groq as well?
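In case it helps: self.model_endpoint is just the URL the HTTP request is posted to. Together AI and Groq both expose OpenAI-compatible chat endpoints as far as I know; treat the exact URLs below as assumptions to check against each provider's docs. A rough sketch:

```python
import os
import requests

# Endpoint URLs as I understand them; verify against each provider's docs.
ENDPOINTS = {
    "together": "https://api.together.xyz/v1/chat/completions",
    "groq": "https://api.groq.com/openai/v1/chat/completions",
}


def chat(provider, model, prompt):
    """POST an OpenAI-style chat request to the chosen provider."""
    api_key = os.environ[f"{provider.upper()}_API_KEY"]
    response = requests.post(
        ENDPOINTS[provider],
        headers={"Authorization": f"Bearer {api_key}"},
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```

So switching platforms is mostly a matter of swapping the endpoint string, the API key, and the model name; the request body stays in the same OpenAI-style shape.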