Agents are basically Prompt Chaining, where each prompt has access to real-world connectivity and includes it in the prompt. It can target different LLMs to generate its basic needs based on the capabilities given to that prompt. Pack it into a box, and call it an agent.
Pretty good summary. Now just add some basic logic to the framework to handle all the errors that the LLM produces when trying to connect to the real world, and you have created a decently reliable system.
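The "prompt chaining plus error handling" idea from the two comments above can be sketched as a small loop. Everything here is illustrative: `call_llm`, `lookup_weather`, and the Action/Observation text format are hypothetical stand-ins, not the API of any particular framework.

```python
# Minimal sketch of an "agent as prompt chaining" loop with basic error
# handling. call_llm is a stand-in for a real model call; it is hard-coded
# so the example runs without an API key.

def call_llm(prompt: str) -> str:
    # Pretend model: first asks for a tool call, then answers once it
    # sees an Observation in the prompt.
    if "Observation:" not in prompt:
        return "Action: lookup_weather(Paris)"
    return "Final Answer: It is sunny in Paris."

def lookup_weather(city: str) -> str:
    # Toy "real world" tool; a real one could raise on network errors.
    if not city:
        raise ValueError("empty city")
    return f"sunny in {city}"

def run_agent(question: str, max_steps: int = 5) -> str:
    prompt = question
    for _ in range(max_steps):
        reply = call_llm(prompt)
        if reply.startswith("Final Answer:"):
            return reply.removeprefix("Final Answer:").strip()
        if reply.startswith("Action: lookup_weather("):
            city = reply[len("Action: lookup_weather("):-1]
            try:
                observation = lookup_weather(city)
            except Exception as err:
                # The "basic logic to handle errors": feed the failure
                # back into the next prompt instead of crashing.
                observation = f"tool error: {err}"
            # Chain: append the model's step and the tool result,
            # then prompt again with the grown context.
            prompt = f"{prompt}\n{reply}\nObservation: {observation}"
    return "gave up after max_steps"

print(run_agent("What is the weather in Paris?"))
```

Each iteration is one link in the chain: the model output plus the tool observation are appended to the prompt, and the loop terminates on a final answer or after `max_steps` tries.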
Hi Max! Great video, very informative. I had a question: do all foundation models allow for chain-of-thought and ReAct prompting? Or is this something new that was developed in the granite-3-8b instruct model?
Thanks :-) This works with all models, even fairly old ones. Actually, newer models (like granite-3-8b) have a lot of chain-of-thought examples in their training data, so they tend to do that automatically without you even having to tell them.
I don't agree, we can easily build simple agents with ChatGPT. We don't need yet another costly tool to create simple agents.