Great walkthrough. Been having a lot of fun with DSPy.
Thanks for sharing this review of DSPy; it's amazing. I'm very excited about this framework.
However, I don't understand how to create a DSPy system capable of answering several queries in a single request. For example, imagine the program receives queries={'key_1': 'query1', 'key_2': 'query2', ...} and snippets={'key_1': 'snippets1', 'key_2': 'snippets2', ...} and returns the response in JSON format, answer={'key_1': 'answer1', ..., 'key_n': 'answer_n'}, in a single call. I have managed to design a zero-shot program that answers different questions in one call, but optimizing it is not at all intuitive.
P.S. To make the output a dictionary I used typed predictors; that is, an Output(BaseModel) class defines each of the keys key_1, key_2, etc.
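For reference, a minimal sketch of that pattern, assuming DSPy 2.x's TypedPredictor and an illustrative fixed-key Answers model (the signature and field names here are assumptions for illustration, not code from the video):

```python
import dspy
from pydantic import BaseModel

# Illustrative fixed-key output model, mirroring the Output(BaseModel) idea above.
class Answers(BaseModel):
    key_1: str
    key_2: str

class BatchQA(dspy.Signature):
    """Answer every query in one call, using the matching snippets as context."""
    queries: str = dspy.InputField(desc="JSON mapping of keys to queries")
    snippets: str = dspy.InputField(desc="JSON mapping of keys to snippets")
    answers: Answers = dspy.OutputField(desc="one answer per key")

batch_qa = dspy.TypedPredictor(BatchQA)
result = batch_qa(
    queries='{"key_1": "query1", "key_2": "query2"}',
    snippets='{"key_1": "snippets1", "key_2": "snippets2"}',
)
print(result.answers.key_1)
```

Since each training example is a whole batch, an optimizer's metric has to score the full Answers object at once, which may be why optimizing this setup feels unintuitive.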
27:17 I like that 🤣 because it's awesome
This seems one-way. What if I wanted to reflect on the LLM's output? For example, if I asked it to write code and then invoked exec to run it but got an error, how could I update the context and try again?
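One way to get that reflection loop is a plain Python retry wrapper around a string-signature module; the sketch below assumes made-up field names (task, error_feedback) purely for illustration. Recent DSPy versions also ship an assertions feature (dspy.Assert / dspy.Suggest) aimed at exactly this kind of backtracking.

```python
import traceback
import dspy

# Illustrative signature: feed the previous error back in as context.
generate = dspy.ChainOfThought("task, error_feedback -> code")

def generate_with_retries(task: str, max_tries: int = 3) -> str:
    feedback = "none yet"
    for _ in range(max_tries):
        code = generate(task=task, error_feedback=feedback).code
        try:
            exec(code, {})  # run the generated code in a fresh namespace
            return code     # it ran without raising, so accept it
        except Exception:
            # Capture the traceback and retry with it in the context.
            feedback = traceback.format_exc()
    raise RuntimeError("no runnable code after retries")
```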
I really like this!
It would be helpful if you could show which line(s) of code you are referring to; it's sometimes hard to guess.
Does Qdrant work with OpenAI embeddings? And does the Qdrant DSPy integration work with OpenAI embeddings?
yes
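For anyone curious, here is a minimal end-to-end sketch of Qdrant with OpenAI embeddings, assuming the current openai and qdrant-client Python packages (the collection name and embedding model are just examples):

```python
from openai import OpenAI
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams

openai_client = OpenAI()           # reads OPENAI_API_KEY from the environment
qdrant = QdrantClient(":memory:")  # throwaway in-process instance for the demo

def embed(text: str) -> list[float]:
    # text-embedding-3-small produces 1536-dimensional vectors
    resp = openai_client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

qdrant.create_collection(
    collection_name="docs",
    vectors_config=VectorParams(size=1536, distance=Distance.COSINE),
)
qdrant.upsert(
    collection_name="docs",
    points=[PointStruct(id=1, vector=embed("DSPy compiles prompts."),
                        payload={"text": "DSPy compiles prompts."})],
)
hits = qdrant.search(collection_name="docs",
                     query_vector=embed("What does DSPy do?"), limit=1)
print(hits[0].payload["text"])
```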
I thought DSPy stood for something like "Digital Signal Processing (DSP) for Python" ;-)
They changed the name after the second revision. DSPy is now pronounced "DS-pie".
Why is the default LLM always OpenAI when Gemini Pro access is free?
We have been seeing better quality and performance with OpenAI. Have you seen something different?
@qdrant I mean, why not have Gemini as an option at least?
@ppbroAI I agree with Qdrant; Gemini is just not there yet.
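For what it's worth, swapping in Gemini is a one-line change in recent DSPy releases, assuming a version where dspy.LM routes through LiteLLM (the model string and environment variable follow LiteLLM's usual conventions; none of this is from the video):

```python
import dspy

# Requires GEMINI_API_KEY in the environment; the model name follows
# LiteLLM's "gemini/<model>" convention.
lm = dspy.LM("gemini/gemini-1.5-pro")
dspy.configure(lm=lm)

qa = dspy.Predict("question -> answer")
print(qa(question="What is Qdrant?").answer)
```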
This video is great but also wildly similar to this other video: ua-cam.com/video/41EfOY0Ldkc/v-deo.html
Flipper?
Yep🙂
I don't want to sound harsh, but sadly your accent and monotone are putting me to sleep :(
This is a complex topic and a lot of people try to explain it, but this one is the best.
???
@truliapro7112
Perhaps your ears need a tune-up. I love foreign language accents. I love learning new things. I love this channel's wisdom. This lecture and tutorial is one of the very best available.