I've been away for a while, and I'm thrilled to return and see the amazing progress and new features that have been implemented. It all makes perfect sense, and I’m excited to try this out. Thank you for your hard work!
Love what you are up to with Jar3d.
Best name for an app ever
I've been away for a while, not by choice, but I'm excited to catch up on my favorite channel, and this looks great so far. You are doing exactly what I wanted to do: an advanced long-form output research agent, so this is great. I wish I could sponsor, but I'm hoping I can in 2 to 3 months when things are going better.
That sounds really interesting! I'll spin up a copy and give it a try. I've been meaning to explore integrating Neo4j with OpenAI, should be fun to see how it goes. Thanks!
Thank you for your breakdown. I found it especially useful for understanding which model is better for what task in an agentic framework. I'm guessing o1 will also be quite expensive on top of the slower speed, so it should be reserved mostly for complex tasks where the agents need to follow steps to a T.
Adopting the Chainlit task list definitely enhances the responsive feel, but given the slow speed of o1, I'd recommend generating a unique project id that you and the agent work on, and using the webhook session restore to report status messages. Each message can have preset actions that replace your slash commands. That will enable you to offer feedback on the fly, instead of aggregating all the feedback into a single message.
That slightly increases the complexity, more or less depending on the construction of your LangGraph state classes. I cloned the repo last night; hopefully I'll have a chance to look at the code and see if there's any PR I could do.
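A minimal sketch of what the per-message feedback idea above could look like, assuming Chainlit's Action / action_callback API (the Action signature varies a little between Chainlit versions) and a hypothetical report_status() helper that the agent graph would call; this illustrates the suggestion, not the project's actual code:

```python
# Sketch only: preset actions on status messages instead of slash commands.
# report_status() and the "approve"/"revise" action names are hypothetical.
import uuid
import chainlit as cl

@cl.on_chat_start
async def start():
    # Unique project id shared between the user and the agent run.
    cl.user_session.set("project_id", str(uuid.uuid4()))

async def report_status(text: str):
    """Send a status update carrying preset feedback actions."""
    actions = [
        cl.Action(name="approve", value="approve", label="Approve"),
        cl.Action(name="revise", value="revise", label="Request revision"),
    ]
    await cl.Message(content=text, actions=actions).send()

@cl.action_callback("approve")
async def on_approve(action: cl.Action):
    # Feedback arrives immediately and could be written back into the
    # LangGraph state keyed by the current project id.
    project_id = cl.user_session.get("project_id")
    await cl.Message(content=f"Approved this step for project {project_id}").send()
```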
This sounds awesome. I appreciate this will be a lot of work but it will certainly enhance the user experience.
I really love your videos
Wow, glad you popped up on my feed! Question: is it possible to change the system prompt?
Very cool, subscribed
Great work
If you could choose one AI agent framework that you believe will become the industry standard, which one would it be?
Thank you for sharing this and explaining it in such detail. I get really excited when I see a new video of yours pop up on my subs tab. Curious, how big of an improvement do you think this is compared to just using GPT-4o for all the agents?
Really great work and thanks a lot for this video with all this additional information.
While going through the code and prompts, I somehow got stuck on COGOR and the prompt.
- This kind of prompting seems to be quite unique; I could not find much additional information besides the 2 links you provided. Am I missing something, or am I searching for the wrong keywords?
- In the prompt, you use the phrase "python tool", which confused me a lot, since I could not find a Python tool in your code. In the meantime I understood that you mean the markdown python tool. But I'm still wondering: might the LLM be confused as well? Why is this so important anyway?
- Why are you using these icons? They confused me a lot, and maybe the LLM as well. Would it not be easier to have words with clear semantics?
I was not sure where to put these questions: here in the comments, in the repo issues, or ...?
Anyway, looking forward to your answers.
Very good, I was not actually sure how to build the Docker image!
Awesome work here! 👏
For those of us who are not blessed with OpenAI Tier-5 access, is it possible to offer an OpenRouter setting in this project so we can give this a try? 😁
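For context, OpenRouter exposes an OpenAI-compatible endpoint, so a setting like this could plausibly be wired through the standard openai client just by swapping the base URL; the environment variable name and model identifier below are placeholders, not settings from the project:

```python
# Sketch only: routing requests through OpenRouter's OpenAI-compatible API.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],  # placeholder variable name
)

response = client.chat.completions.create(
    model="openai/o1-preview",  # example OpenRouter model identifier
    messages=[{"role": "user", "content": "Hello from Jar3d"}],
)
print(response.choices[0].message.content)
```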
It is awesome. I am going to contact you once I redo my project with the new concepts; I think I can use your help.
Nice, love your work.
Can we run this with local LLMs via Ollama?
Anyone can get access to the o1-preview model via OpenRouter.
EIC not applicable to UK companies.
Dude is an AI video himself.
How dare you compare this to Perplexity, which is a click away, when it literally took you an hour to explain how to use your tool.