The whole point of LLMs is the ability to interact with them in natural language, directly. If that is gone, then FMs should be built around automation and NOT using English.
You are still interacting (as an end user) via natural language. This is showing how best to make SOTA prompting techniques available to the masses without them having to learn those techniques themselves.
I've met a lot of people skeptical of DSPy, and these kinds of videos do nothing to dispel the skepticism. I'm 10 minutes in and we haven't seen any examples of how this is any different from ordinary prompting with an LLM. The "goal" he describes is literally just the prompt without explicit CoT language, and CoT language will probably be unnecessary with stronger models, which will better infer when they need CoT to reach a good result (excluding cases where output is coerced in JSON mode, etc.).
I paused at 16:44 to read the produced prompt. It's fine. But you had to do all the work to get there, and I'm not sure that's substantially less work than writing the prompt yourself, especially when you're going to get GPT-4 to write the first version of the prompt for you anyway (remember, turbo-preview knows what LLM prompts are).
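For anyone trying to picture the difference being debated here, this is roughly what the "goal plus module" setup looks like in DSPy code. Treat it as a minimal sketch, not the presenter's exact code; class and field names vary across DSPy versions, and the model name is illustrative:

```python
import dspy

# Configure the LM backend (model name here is illustrative).
lm = dspy.OpenAI(model="gpt-4-turbo-preview")
dspy.settings.configure(lm=lm)

class AnswerQuestion(dspy.Signature):
    """Answer the question concisely."""  # the "goal": no explicit CoT wording
    question = dspy.InputField()
    answer = dspy.OutputField()

# ChainOfThought adds the reasoning step at the module level,
# so the signature itself never says "think step by step".
qa = dspy.ChainOfThought(AnswerQuestion)
pred = qa(question="What year was Stanford founded?")
print(pred.answer)
```

Whether pulling the CoT wording out of the prompt and into a module is worth the extra machinery is exactly the disagreement in this thread.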
Very good, Omar Khattab!
May God protect you, watch over you, and bless you.
Why do you think he's Muslim? He might have already passed this stage in human development.
Great! Thank you!
dank
🙏
The magic is in the compiling engine underneath. The optimization will get better with open-source contributions.
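To make "compiling" concrete, here's roughly what that optimization step looks like. This is a minimal sketch assuming the BootstrapFewShot teleprompter, a toy metric, and a one-example trainset; names and signatures may differ between DSPy releases:

```python
import dspy
from dspy.teleprompt import BootstrapFewShot

lm = dspy.OpenAI(model="gpt-4-turbo-preview")  # illustrative model choice
dspy.settings.configure(lm=lm)

qa = dspy.ChainOfThought("question -> answer")  # shorthand string signature

# Toy metric: does the gold answer appear in the prediction? (illustrative)
def contains_answer(example, pred, trace=None):
    return example.answer.lower() in pred.answer.lower()

# A tiny trainset for the sketch; real use needs many more examples.
trainset = [
    dspy.Example(question="What year was Stanford founded?",
                 answer="1885").with_inputs("question"),
]

# The "compiling engine": runs the program, keeps demonstrations that
# pass the metric, and bakes them into the compiled program's prompts.
optimizer = BootstrapFewShot(metric=contains_answer)
compiled_qa = optimizer.compile(qa, trainset=trainset)
print(compiled_qa(question="What year was MIT founded?").answer)
```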
Another rubbish presentation on DSPy. Do these people really understand it? Just a regurgitation of the documentation.