A Prompt Engineering Trick for Building "High-level" AI Agents

  • Published 14 Jan 2025

COMMENTS • 36

  • @wadejohnson4542
    @wadejohnson4542 6 months ago +8

    Intelligent. Informative. Another addition to an already impressive body of work. Well done, young man. I look forward to your videos.

  • @joefajen
    @joefajen 6 months ago +5

    Thanks!

  • @nedkelly3610
    @nedkelly3610 6 months ago +13

    Excellent agent system. Please do build on this by adding a RAG tool for both local docs and fetched internet docs.

  • @brucehe9517
    @brucehe9517 5 months ago

    Thank you! I really enjoy watching your videos and have learned a lot from them. Please keep posting more videos related to LangGraph and agentic workflows.

  • @leonwinkel6084
    @leonwinkel6084 6 months ago

    Awesome, thanks for sharing! For the router, what I often do is ask "what can be done better here?" along with the question and the response. It often gives good suggestions that can then be processed in the next loop. Or I ask "on a scale of 0-10, how would you rate this answer?" and then "what is needed to make it a 10?" It's quite cool what comes out of it (a sketch of that loop is below).
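    A minimal sketch of that rate-and-refine loop, assuming only a generic llm callable that takes a prompt string and returns a string (all names below are hypothetical, not from the video):

```python
# Sketch of a self-rating refinement loop around a hypothetical `llm` callable.
def refine(llm, question: str, answer: str, target: int = 9, max_rounds: int = 3) -> str:
    for _ in range(max_rounds):
        rating = llm(
            f"Question: {question}\nAnswer: {answer}\n"
            "On a scale of 0-10, how would you rate this answer? Reply with a number only."
        )
        try:
            score = int(rating.strip().split()[0])
        except ValueError:
            break  # model did not return a number; stop refining
        if score >= target:
            break
        critique = llm(
            f"Question: {question}\nAnswer: {answer}\n"
            "What is needed to make this answer a 10? List concrete improvements."
        )
        answer = llm(
            f"Question: {question}\nDraft answer: {answer}\n"
            f"Improvements to apply: {critique}\nRewrite the answer with these improvements."
        )
    return answer
```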

  • @MuhanadAbulHusn
    @MuhanadAbulHusn 6 months ago

    Thanks a lot, the explanation and implementation were amazing.

  • @BradleyKieser
    @BradleyKieser 5 months ago

    Great thinking and design.

  • @sabitareddy3359
    @sabitareddy3359 5 months ago

    Superb presentation. Great voice.

  • @shokouhmostofi2786
    @shokouhmostofi2786 6 months ago

    Thank you for the great content!

  • @joefajen
    @joefajen 6 months ago

    I thoroughly enjoyed this video! I'd be very interested in seeing how you might incorporate a RAG aspect into this meta-prompting approach. My use case concerns technical writing work, so I am exploring ways to do text analysis and document generation in stages in relation to a specific body of text content.

    • @Data-Centric
      @Data-Centric 6 months ago +2

      Great suggestion, I'll see what I can do here.

  • @2008tmp
    @2008tmp 6 months ago +2

    Do you have a link to the GitHub repo? It looks like you linked to a different project in the description. Great presentation!

    • @Data-Centric
      @Data-Centric 6 months ago +2

      Hey, thanks for letting me know. I've linked the correct repo now.

    • @2008tmp
      @2008tmp 6 months ago +1

      @@Data-Centric Thank you!

  • @HassanAllaham
    @HassanAllaham 6 months ago +1

    Thank you for the very good content you provide. This is one of the best videos I have ever seen.
    You mentioned 3 important points I would like to comment on:
    1- LLM makers are doing a good job, but in the wrong direction. They try to produce a general LLM that can do many tasks in many domains, which produces a weak LLM (it may be fairer to call it a "stupid" LLM). For example, they train LLMs on multiple programming languages (C, C++, PHP, Python, JavaScript, etc.). I am quite sure this will not produce a real "expert" LLM. I wonder what level of expertise we would get if a model were trained only on coding, and only in one programming language, say Python or JavaScript. I believe we would get a genuinely strong LLM for that domain and language, even at a small parameter count (maybe 3-7B).
    2- I believe breaking the job of the master meta agent itself into more granular, simpler logical jobs would make it better and more able to use smaller LLMs: an agent responsible only for breaking the task into smaller ones as a list (array), then programmatically looping through that array and handing each element (small task) to a router agent that is responsible only for routing it to the suitable executor agent. Each executor agent should be responsible for only one simple task, and if it is a tooled agent it should use only one tool (a rough sketch of this split is below).
    3- The chat history between agents should be held by the executor agent, not the master. This way the master receives the response from the executor and deals with it with only a short amount of history (so the lost-in-the-middle problem probably will not happen). We may join all the histories just for logging and debugging, not as chat history for any agent.
    Answering your question: yes, I would like to see more from you, whatever development you make on this interesting workflow (e.g. RAG as memory).
    By the way, I believe you should retry Ollama on some of your older experiments with OLLAMA_NOHISTORY=1 set as an environment variable. I made some trials using your code from older videos and got better results with OLLAMA_NOHISTORY=1. Also, I would like to know why you do not use Google Colab. In addition to being able to use big closed-source LLMs, you can install Ollama and try many open-source LLMs on the same code base in the same video, and it will not put your device under heavy load, since you are already using other heavy programs to make these wonderful videos.
    I also wonder whether we could make an agent (or agents) that might be called an "agent-generator agent" (or better: a tool_selector_agent to select, if any, the needed tool from pre-made tools, plus a code_interpreter_agent to create the needed tool if it is not in the pre-made list, plus a system_prompt_generator_agent to create the needed system prompt with the tool description included). This agent or group of agents would be able to create a new agent with a good name, a good system prompt, and the suitable tool for the simple task it is created to do.
    Again, thank you for the very good content. 🌹🌹🌹
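    A rough sketch of the decomposer/router/executor split from point 2, with each executor keeping its own short history as in point 3. Everything here (the function names, the JSON convention, the llm callable) is a hypothetical illustration, not the video's implementation:

```python
import json

# Hypothetical sketch of point 2: decompose -> route -> execute, with per-executor history (point 3).
# `llm` is a generic prompt-to-text callable; `executors` maps executor names to system prompts.
def run_pipeline(llm, executors: dict, task: str) -> list:
    # Decomposer agent: asked to return subtasks as a JSON list of strings.
    subtasks = json.loads(llm(
        f"Break this task into a short JSON list of simple subtasks: {task}"
    ))
    histories = {name: [] for name in executors}  # each executor keeps only its own history
    results = []
    for subtask in subtasks:
        # Router agent: picks exactly one executor by name.
        choice = llm(
            f"Subtask: {subtask}\nAvailable executors: {list(executors)}\n"
            "Reply with the single best executor name only."
        ).strip()
        system_prompt = executors.get(choice, "You are a general-purpose assistant.")
        history = "\n".join(histories.get(choice, [])[-4:])  # short, local history only
        result = llm(f"{system_prompt}\nRecent history:\n{history}\nSubtask: {subtask}")
        histories.setdefault(choice, []).append(f"{subtask} -> {result}")
        results.append(result)  # the master only ever sees these short results
    return results
```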

  • @PerfectlyNormalBeast
    @PerfectlyNormalBeast 6 months ago

    That is a powerful concept.
    I'm only a few minutes in, so maybe it's covered later. It's almost like a contracting company: I need this. OK, we've seen that before, here's a proven worker ||or|| sure, we'll build a worker for that.
    So much of day-to-day interaction is interface based. Having agents able to define an interface, then build agents to work within it, is such a great idea (a sketch of such an interface is below).
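    One way to picture such an agent "interface" is a small spec that a meta agent fills in and a worker is then built against; this is purely illustrative and all names are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical "interface" a meta agent could define before a worker is built against it.
@dataclass
class AgentSpec:
    name: str                                    # e.g. "web_researcher"
    system_prompt: str                           # the role the worker must play
    tool: Optional[Callable[[str], str]] = None  # the single tool the worker may call

def build_worker(llm, spec: AgentSpec) -> Callable[[str], str]:
    """Return a worker that always operates within the given interface."""
    def worker(task: str) -> str:
        tool_output = spec.tool(task) if spec.tool else ""
        return llm(f"{spec.system_prompt}\nTool output: {tool_output}\nTask: {task}")
    return worker
```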

  • @syedibrahimkhalil786
    @syedibrahimkhalil786 5 months ago

    Subbed! Thanks for the great insights. Could you please mention some smart-city use cases where such an agent setup would be fruitful? I'm looking forward to using LLMs to automate crowdsensed data collection and extract meaningful results. I would really appreciate your input.

  • @RBBannon1
    @RBBannon1 6 months ago

    Well done. Thank you!

  • @sirishkumar-m5z
    @sirishkumar-m5z 5 months ago

    Unlock the potential of high-level AI agents with innovative prompt engineering techniques. SmythOS can take your AI projects to the next level with its advanced capabilities and customization.

  • @vancuff
    @vancuff 5 months ago

    Is it possible to train a Llama 3.1 model on this meta-prompting technique?

  • @mohanmadhesiya3116
    @mohanmadhesiya3116 6 months ago +1

    Make a video on meta-prompting with a SQLite database, where it can answer user queries based on the database and the internet (web search).

  • @shiyiyuan6318
    @shiyiyuan6318 6 months ago +1

    If I remember correctly, in your previous video you did not recommend using AI frameworks, but in this video you use LangGraph as an example. Can you tell me why?

    • @Data-Centric
      @Data-Centric 6 months ago +3

      I had a video where I discussed building custom vs using frameworks like Crew AI and AutoGen. My main gripe with those frameworks is that they have hidden prompts in the repo to orchestrate your workflows. LangGraph is different: it's more customizable, providing just the minimum tools (essentially the graph and state objects) to assist you with building workflows (see the sketch below).
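      A minimal sketch of that "just graph and state" style in LangGraph; the state fields and node functions are illustrative placeholders, not the video's code:

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

# Only the graph and state machinery comes from LangGraph; the nodes are plain functions.
class State(TypedDict):
    question: str
    answer: str

def plan(state: State) -> dict:
    # In a real workflow this would call your planner LLM.
    return {"answer": f"Plan for: {state['question']}"}

def execute(state: State) -> dict:
    # In a real workflow this would call your executor LLM or tools.
    return {"answer": state["answer"] + " -> executed"}

graph = StateGraph(State)
graph.add_node("plan", plan)
graph.add_node("execute", execute)
graph.set_entry_point("plan")
graph.add_edge("plan", "execute")
graph.add_edge("execute", END)
app = graph.compile()
print(app.invoke({"question": "Summarise the repo", "answer": ""}))
```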

    • @free_thinker4958
      @free_thinker4958 6 months ago

      He was talking about Crew AI and AutoGen.

  • @NirFeinstein
    @NirFeinstein 6 months ago

    Wow, amazing idea and execution; everything is so thoughtful.
    Can I run it to build and improve itself into a very capable system? 😬😬😬

  • @CUCGC
    @CUCGC 6 months ago

    I usually follow along with the code. I could not this time: wrong GitHub repo. I like the RAG with an embedding model.

    • @Data-Centric
      @Data-Centric 6 months ago +2

      Hey, thanks for letting me know. I've linked the correct repo now.

  • @j_Techy
    @j_Techy 6 months ago

    What are some ways I can make money with AI agents or use them in a business model?

    • @HassanAllaham
      @HassanAllaham 5 months ago

      Go chat with some "strong" LLMs like GPT-4 and ask them your question.
      You can add "explain your reasoning in detail" or "think step by step" to some of your prompts so you can understand, evaluate, and maybe correct the LLM's "thinking" by redirecting it with prompt modifications until it gives you the answers you need.
      Keep in mind that:
      1- LLM results are the highest-"probability" tokens joined together, depending on the datasets used to train the LLM and the number of layers and hyperparameters used to calculate this probability. en.wikipedia.org/wiki/Probability
      2- For now, because of hallucinations, you cannot trust an LLM with critical or dangerous tasks such as banking or hospital work.
      3- LLMs do not execute functions themselves; they just choose the suitable function with the needed parameters, and your app (program) executes those functions (a bare-bones sketch of this is below).
      4- To give an agent the ability to do jobs that need muscle, you need to use the results of the function it chooses, or the results of the code it writes, to control some kind of "stupid" machine so that it appears to be a very "clever", productive machine. 😎 After starting your AI-based organization, remember that you have to pay me a share of its profits, since I gave you the best way to do it. 🤑😁
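      A bare-bones sketch of point 3: the model only proposes a function name and arguments (as JSON here), and the surrounding program does the execution. The registry and the JSON convention are assumptions for illustration:

```python
import json

# Hypothetical tool registry: the app, not the LLM, actually runs these functions.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stand-in for a real API call

TOOLS = {"get_weather": get_weather}

def handle_tool_call(llm_output: str) -> str:
    """Expects the model to reply like: {"name": "get_weather", "args": {"city": "London"}}"""
    call = json.loads(llm_output)
    func = TOOLS[call["name"]]   # the model only *chose* the function and its parameters...
    return func(**call["args"])  # ...the surrounding program executes it

print(handle_tool_call('{"name": "get_weather", "args": {"city": "London"}}'))
```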

  • @i_forget
    @i_forget 6 months ago

    I've created a novel, promising prompting framework. It may be a good fit for the meta agent's agent-to-agent prompting. Let me know if you would like to learn about it.

  • @freeideas
    @freeideas 6 months ago +2

    I don't understand all the focus on agent swarms. I don't see how agents can do anything difficult besides generating content. Can they make a real game that is too large to fit into a single script? No, because they can't play the game to see whether it works. Can they make a real command-line program? Not really, because they are mostly unable to realize when their approach is wrong and they need to start over with a different plan. I am complaining here because I hope someone can tell me I am wrong. I am trying to build my own AI agent that uses trial and error (like a human does) to accomplish things it doesn't really know how to do until it tries (like most of my own projects). I would love for someone to tell me this has already been done so I don't have to build it.

    • @J3R3MI6
      @J3R3MI6 6 months ago

      They can easily play the game

    • @freeideas
      @freeideas 6 months ago

      @@J3R3MI6 Really? They can see the screen and push the arrow keys? Wow, that would be big news for me. Can you paste a link to something showing me how to make an LLM operate the UI?

    • @airobsmith
      @airobsmith 5 months ago +1

      @@freeideas Fundamentally, LLMs are not good at game playing because they lack complex planning ability: the result of one move can bring multiple, unknown responses, and the LLM has to wait for the response. Real-world planning is a key part of current research. See the Data Centric video "AI Agents: Why They're Not as Intelligent as You Think" for some insight into this problem.