Jim, you're the GOAT 💪 Always astounded by your n8n knowledge and your examples of how to stretch its capabilities further - this dynamic prompting example is so helpful.
This is great - I recently did something similar in one of my agentic workflows: used data validation in Sheets to adjust the agent's system prompt. The end user can choose different values in the dropdown, or even add their own, to adjust the agent's behavior... very cool to see others on the same wavelength!
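For anyone wanting to try this pattern, here's a minimal sketch of the idea (the preset names and prompt text are my own invention, not from the comment or video): a dropdown value pulled from a Sheets cell selects a preset system prompt, and an unrecognized value - one the user typed in themselves - is folded in as extra behavior instructions.

```python
# Hypothetical sketch: map a Sheets dropdown value to an agent system prompt.
# Preset names and prompt wording are invented for illustration.
PRESET_PROMPTS = {
    "formal": "You are a precise, formal assistant. Answer concisely.",
    "friendly": "You are a warm, conversational assistant.",
}

def build_system_prompt(dropdown_value: str) -> str:
    """Return a preset prompt, or treat a custom value as added instructions."""
    value = dropdown_value.strip().lower()
    if value in PRESET_PROMPTS:
        return PRESET_PROMPTS[value]
    # The user typed their own value into the dropdown:
    # append it as free-text behavior guidance.
    return (
        "You are a helpful assistant. Additional instructions: "
        + dropdown_value.strip()
    )
```

In an n8n workflow this function's job would be done by an expression or Code node that reads the cell value before the agent node runs.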
I like the direction you're going with classification and open-source models - nice job, dude.
Love to see a small LLM being used. People keep pushing for more - 400B -> 600B... but classification doesn't require a 600B model! Thank you Max! Jim, the pattern is FIRE!
Exactly! It's like lighting a candle with a flamethrower. It definitely works, just uses a ton more carbon.
@theflowgrammer I'll keep the flamethrower in mind the next time I'm asked why I use a small model! 😆
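One reason small models are plenty for classification is that you can constrain the output to a fixed label set and validate it. A rough sketch of that idea (the labels and matching logic are my own, not from the video): even a slightly messy reply from a small model gets mapped to a clean category.

```python
# Hypothetical sketch: snap a classification reply onto a fixed label set,
# so a small model's loosely formatted output still yields a clean category.
LABELS = ["billing", "technical", "sales", "other"]  # invented example labels

def parse_label(model_reply: str) -> str:
    """Match the model's raw reply to an allowed label (default: 'other')."""
    reply = model_reply.strip().lower()
    for label in LABELS:
        if label in reply:
            return label
    return "other"
```

Pairing this with a prompt that lists the allowed labels keeps the task well inside a small model's abilities.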
This was really clever. Props to the developer! I've extracted prompts from workflows to Airtable for easier updates and reuse, but this is a whole new level! Appreciate it.
Great job as always, Jim. That was mind-blowing. More please.
Totally! Really excited to see what he cooks up next.
Thanks Jim for the really cool & practical tips!
It is really great to see these longer form videos with the detailed workflow explanations.
Great to know, Keith - cheers! Got a bunch more coming.
I like these videos!!!! 🎉
This is soooo smart!
I know right? I'm going to be checking all my tools now for metadata properties I can employ in this way :)
These are the seven words you solicited.
I'm an LLM engineer with fine tuning expertise. I reached out to your support and they were like, "Thanks but no thanks." No worries either way. I don't have a burning desire to show how it works. Just FYI.
Awesome Content! Thank you
Sorry for the stupid question, but should I split this into two separate n8n workflows - one to create the webhook and keep it alive (maybe triggered every 6 days?), and another for the actual table-updating work? I'm asking because in the current setup, every time I hit test a new Airtable webhook seems to be created, which first is pointless, and second eats into my plan's limits (the free plan allows a maximum of 10 webhooks; I've upgraded to a paid plan but don't know the new limit yet). Otherwise this workflow is genius. Love it 🎉
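For context on the "every 6 days" idea: Airtable webhooks expire after roughly seven days unless refreshed, which is why a separate scheduled keep-alive workflow is a reasonable split. A rough sketch of the refresh call (the base ID, webhook ID, and token below are placeholders, and you should confirm the endpoint against Airtable's Web API docs):

```python
# Hypothetical sketch: refresh an Airtable webhook so it doesn't expire.
# IDs and token are placeholders; verify the endpoint in Airtable's Web API docs.
import urllib.request

AIRTABLE_API = "https://api.airtable.com/v0"

def refresh_url(base_id: str, webhook_id: str) -> str:
    """Build the webhook refresh endpoint URL."""
    return f"{AIRTABLE_API}/bases/{base_id}/webhooks/{webhook_id}/refresh"

def refresh_webhook(token: str, base_id: str, webhook_id: str) -> None:
    """POST to the refresh endpoint to extend the webhook's lifetime."""
    req = urllib.request.Request(
        refresh_url(base_id, webhook_id),
        method="POST",
        headers={"Authorization": f"Bearer {token}"},
    )
    urllib.request.urlopen(req)  # raises on HTTP errors
```

In n8n this would be a Schedule Trigger feeding an HTTP Request node; the point is only that the refresh call, not webhook re-creation, is what the keep-alive workflow should do.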
How do I run a discovery call?
Short answer: I'd call it a "problem/solution" discovery call - agencies do these all the time with their clients. "Google Design Sprint" might be worth a quick read; I've used that format before.
Thank you. I'll check it out
I have to admit, Jim's solution is actually really smart! I hadn't considered using the column description through the API before.
It would be cool if the AI nodes could be built more modularly, by passing a JSON structure or similar, for more efficiency.
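For anyone who missed the trick being praised here, a minimal sketch of the idea (the field names and wording are my own, not from the video): column descriptions fetched from the table's metadata get folded into the prompt, so editing a description in Airtable changes the agent's instructions without touching the workflow.

```python
# Hypothetical sketch: build a prompt from table field metadata, e.g. the
# list of fields returned by Airtable's table schema endpoint. Field names
# and fallback wording are invented for illustration.
def prompt_from_fields(fields: list[dict]) -> str:
    """fields: e.g. [{"name": "Category", "description": "One of: bug, feature"}]"""
    lines = [
        f"- {f['name']}: {f.get('description') or 'no guidance provided'}"
        for f in fields
    ]
    return "Fill in each column according to its description:\n" + "\n".join(lines)
```

The prompt is now data, not workflow configuration - which is exactly the "dynamic prompting" pattern the thread is about.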