DSPartY is bumpin’
Love your tutorials and the engagement you’ve been putting in within the DSPy community.
Connor - you can pan in and out as much as you want IMO, shows your excitement about the subject. The quality of the content is awesome. Also appreciate the shoutouts to the broader community. Thanks for sharing!
Thank you so much!! Haha, appreciated! Although I see far less negative comments this time around without the zooming haha! Thank you so much! Beyond grateful for all the help the DSPy community has given me in learning about this!
Thanks for the detailed video Connor, this is a great help. I am working with LangGraph and multi-agent models, and I had to optimise some of my prompts manually to reduce the number of agent hops to the LLM. With the BayesianSignatureOptimizer, I believe every prompt can be optimised, which should reduce the hops made by the agents.
Ah, I need to learn a little more about LangGraph and I think Crew AI as the latest Multi-Agent framework before I can really comment on this!
Great one Connor! Good to see your progress that, naturally, helps us all...
Thank you so much Jose, really happy to hear that! Learning a new tool is certainly quite the journey haha!
great tutorial Connor, looking forward to more advanced stuff like Agent application!
Brilliant Connor, thanks so much for this video and looking forward to more about this subject!
Thank you so much Brad! Really happy to hear it! DSPy!
Thank you Connor. This is exactly what the world needed.
Thank you so much Clint, appreciate it as always!
Very exciting stuff, thanks Connor!
Thanks Daniel!
Nice video, Connor. Could you do a more in-depth video on the optimization process? In particular, looking at the series of prompts/examples selected throughout the optimization (analogous to doing a small lin. regression/backprop example by hand for intuition) and the overall token cost of these optimizations.
Ah indeed this is quite the test of my comprehension! Thank you so much, this is a fantastic idea! Give me a little bit of time to work through this though -- I will send this message through to the DSPy discord, I'm sure Omar, Michael, or Krista would be happy to walk us through this!
looking fresh my dude
Haha thank you!
Great tutorial! I'm looking forward to building on this! Thank you
Great walkthrough, thanks so much!
So many haters, wtf. Great video!! I've been lazy in Python because of copy-pasting with LangChain and LlamaIndex. This video makes Python more fun!
Thank You. This particular vid motivated me to SUBSCRIBE !
Amazing video! Thank you so much! Could you please make a video about how to optimize DSPy with structured output?
Conner - Thanks for the shoutout!! ❤
No thank *you*! Ollama in DSPy!! Amazing!
Thank you, Connor! Keep it up!
Thanks!
Awesome, super inspiring!
This was a fantastic walkthrough! Would love some insight into extracting structured data - I find this extremely useful, and being able to do this with a 7B/13B model (instead of GPT-4, for instance) would greatly decrease the cost of running my application. Thanks so much!
Thanks for the great content. One thing I am missing is how to save the optimized program so I can use it afterwards without constantly re-training.
You should also make a video on the cost of each GPT call. I believe there are hundreds (if not thousands) of calls happening on every execution. DSPy is best paired with a local model like Mistral 7B; otherwise, it will be impossible to scale such a tool to hundreds of docs.
If you are not compiling the program, there aren't that many calls happening... just one call per module (ChainOfThought/Predict/ReAct...), actually. You can check every step in the pipeline with dspy-inspector, for example.
Yes this is exactly where my thinking is going after getting the bill for the video hahah, but more generally yeah my suspicion is that llamas connected in DSPy programs is where the value is -- need to test more to say for sure!
@@neoxelox Ah thank you! A new DSPy tool to try out haha `dspy-inspector`!
@@connor-shorten Can you make a video about it? And also another video on some of the pros & cons of using DSPy in production (if any -- wrt cost, latency, scalability, & flexibility)?
Hey man, great video! I have a few questions though. Can you use other vector DBs as the retriever, like Milvus? Also, is it possible to use less well-known LLMs like Baichuan, Kimi, etc.? Thank you!
Thank you for fixing the zooming!
Haha you got it! Apologies for last time! The zooming has been fired!
Could we cover creating the schema from an empty database, so that the notebook flow actually runs through?
lets gooo
Is it possible to run DSPy in a local Windows environment, say with a Mistral 7B model? It fails for me because of the default value of the url param, which I don't know how to avoid.
How can I load and use my own data to Weaviate and start implementing DSpy's implementation of RAG?
You tried it with Weaviate; is there any way you could do it with Pinecone?
Any idea why the bootstrap with random search performed worse on the eval set? @ 29:00
What can we do to get longer answers? I want it to generate code, but after executing it only gives me 4 lines of code. Does anyone have any ideas?
Can you apply DSPy RAG on PDF files?
No offense, but you took 4 minutes to get to the point. Time is the most valuable resource, and it's even more important with the plethora of information in the fast-moving generative AI space. So please be ruthless in cutting non-value-add content. I'm not criticizing, just voicing an opinion on great videos like this, where you can be even more diligent with your audience's time.
This comment took you longer than 4
@@edwardgao5388 true. I did that to see if the OP can save me some time in the future
Thanks. Saved my 4 minutes 😅
So for production, we'd just copy-paste the optimized signature and few-shot examples into the system prompt? I'm a bit confused about how to wield this tool in production.