1:04 The difference is in the prompting: 4-shot CoT vs 0-shot CoT
bro roasted IBM
Totally 😂, they were leading this space a decade ago, but somehow lost their edge and a great opportunity.
Please provide links to these.
Why is math better?
Function calling?
I think math is better in Gemini because they are comparing apples to oranges ("4-shot CoT" vs "0-shot CoT")
If you reply this as a separate comment, I'll pin it
@@juanjesusligero391 Explain it with an easy real-life example, I don't know about it 😢
Update:
Now I know about 0-shot prompting.
It feels like a scam 😂
0-shot and 4-shot prompting with Chain-of-Thought (CoT):
*0-Shot Prompting:*
- No examples provided
- Model relies on pre-training and context
- Higher risk of inaccurate responses
- Requires robust model understanding
*4-Shot Prompting:*
- Four relevant examples provided
- Model learns from context and examples
- Improves accuracy and relevance
- Enhances model's ability to generalize
In summary, 4-shot prompting provides more context, leading to better performance, while 0-shot relies on the model's pre-training and understanding.
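The difference above is just how the prompt is assembled before it reaches the model. Here's a minimal sketch of that in Python; the arithmetic questions, answers, and helper name are made up for illustration and aren't from the video or any benchmark:

```python
# 0-shot CoT: no examples, just the question plus a "think step by step" nudge.
ZERO_SHOT_PROMPT = (
    "Q: A farmer has 17 sheep and buys 5 more. How many sheep does he have?\n"
    "A: Let's think step by step."
)

# 4-shot CoT: four worked examples (with reasoning) shown before the real question.
FEW_SHOT_EXAMPLES = [
    ("2 + 3 = ?", "Add 2 and 3. 2 + 3 = 5. The answer is 5."),
    ("10 - 4 = ?", "Subtract 4 from 10. 10 - 4 = 6. The answer is 6."),
    ("3 * 4 = ?", "Multiply 3 by 4. 3 * 4 = 12. The answer is 12."),
    ("9 / 3 = ?", "Divide 9 by 3. 9 / 3 = 3. The answer is 3."),
]

def build_four_shot_prompt(question: str) -> str:
    """Prepend the four worked examples, then the real question."""
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in FEW_SHOT_EXAMPLES)
    return f"{shots}\n\nQ: {question}\nA:"

prompt = build_four_shot_prompt(
    "A farmer has 17 sheep and buys 5 more. How many sheep does he have?"
)
print(prompt.count("Q:"))  # 5: four example questions plus the real one
```

Both prompts go to the same model; only the 4-shot one gives it in-context examples to imitate, which is why comparing "4-shot CoT" against "0-shot CoT" scores isn't apples to apples.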
Due to code execution?
need time stamps
Not so secret anymore..
Here first
🥇