Your prompt vids are simply golden!
thank you VAPI King 👑
Can never skip Mark's videos, they're just that good
appreciate you as always!
Awesome video and appreciate you putting it all into a GPT, looking forward to trying it out
my pleasure! Thanks so much Adam, appreciate the feedback
I’m starting to appreciate a lot more why you chose the brand Prompt Advisors.
Love you more!❤
Super interesting!!!❤
Thank you!
Great content. Very well made. Thank you.
thank you so much! Means a lot :)
@@Mark_Kashef Way too little praise on the internet today ;) I have worked in education and course-making for years. I might not be a leading authority on the subject of teaching, but I know what works for me and how I react to content.
In your case, if I may: first of all, you come across as an honest and likable guy. Easy and relaxed. Very easy to listen to. And you have a nice way of explaining things. So I stayed, watched it all, and subscribed. You also seem to have an eye for what makes for good content. So again, well done, and I'm looking forward to more on this channel.
The internet can be a fickle place, but one comment like this can really make someone’s day so thank you!
Always motivating to keep pumping out good content, and thanks for letting me know that it’s working well. Will keep it up!
As always packed full of value!💪
Thank you so much for the kind words 🦾
Nice. More videos on prompt engineering ❤❤❤❤
I was about to pivot to something else but had to drop this 💫
Thank you, very helpful. This was my first time watching one of your videos; congratulations on a job well done.
Thanks so much Marco! Much appreciate your kind words; hopefully I can keep pushing out content you'll like 🦾
Fire video Mark. Stealing some of these prompt secrets for my next video on o1 preview. I will have my assistant send you a royalty check within the next 7-10 business days
Hahaha steal away brother - sharing is caring 🦾
Thank you! This was quite helpful
Yay! Pumped to hear that, appreciate the feedback 🦾
The only reason no system messages are "needed" is because they aren't available. You only need 2 prompts: your initial prompt, and this: "Think hard. Revisit your previous response and fortify it". Prepare for magic.
great point, thanks for sharing!
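For anyone curious, here's a minimal sketch of that two-prompt "fortify" pattern via the OpenAI Python SDK; the model name, example prompt, and overall setup are illustrative assumptions, not something from the video:

```python
# Minimal sketch of the two-prompt "fortify" pattern described above
# (OpenAI Python SDK; model name and prompts are illustrative assumptions).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Turn 1: the initial prompt.
history = [{"role": "user", "content": "Draft a 5-step onboarding plan for a new data analyst."}]
first = client.chat.completions.create(model="o1-preview", messages=history)
history.append({"role": "assistant", "content": first.choices[0].message.content})

# Turn 2: ask the model to revisit and strengthen its own answer.
history.append({"role": "user", "content": "Think hard. Revisit your previous response and fortify it."})
second = client.chat.completions.create(model="o1-preview", messages=history)
print(second.choices[0].message.content)
```

Note that only user and assistant roles are used here, which matches the point above about system messages not being available.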
You Are Awesome Brother, Thank You ☝
Thank you so much! Glad you liked it 🦾
Ah, c'mon, I just finished my app that teaches prompt engineering, and you tell me it's already obsolete 😇 This is moving so fast indeed!
I'm sure there's still gold in there!
There's a reason I haven't come up with my own course haha, by the time the ink is dry, something new comes out.
@@Mark_Kashef I spent most of the time making the platform... I can update the course to follow the trends and updates... plus I have some grounded experience on different use cases (such as RAG implementation for Slavic languages), which can't be found (yet) on the web... because it's for another kind of audience... but if the app grows, I'd like to go there too... or maybe I should do only that? hmmm
@@AI-Easy-App We have a tendency to think that because X thousands of people are aware of these updates and new toys, the average Joe is also aware.
That couldn't be further from the truth -- I teach at companies where most folks still don't know what a custom GPT is, let alone what the difference between GPT-4o and o1 is.
I think that as long as you're delivering value that synthesizes subject matter that's otherwise difficult to digest, you'll be more than golden without constantly running to incorporate the latest thing!
Thank you Mark! 👊
My pleasure! 🦾🤝
Thanks Mark!!!
My pleasure! Thanks for watching 🦾
Great video! Could you please make a prompt generator for o1-mini (since I read it's better at code creation, but not so much at debugging) for coding tasks, with good coding practices like naming variables, adding descriptions of what the code does, etc.?
Thanks very much - I try to make my content as accessible as possible to coders and non-coders; that being said, I'm going to make a video on creating your own prompt engineer
@@Mark_Kashef That would be awesome since I have no coding experience; it would be great to make a "system prompt" for tools like Cursor AI, so those tools can generate great apps with simple prompts
Hi Mark, where can I find your GPT prompt-converter-anything-to-o1? Thanks
it's in the Gumroad link in the description: bit.ly/3B1qRqz
you'll see a button in the package on the bottom left that links to it: paste.pics/RYDSL
Is it worth getting the subscription to have access to o1, or is the free GPT version enough? What reasons would warrant upgrading to the subscription for o1?
Unless your day-to-day use of LLMs requires a lot of number crunching, counting, or other mathematical operations, I wouldn't upgrade just for o1.
As always, other models from other vendors will come out to place more pressure on OpenAI to eventually release at least o1-mini to the free users with some form of limit (my guess)
@@Mark_Kashef Also I heard it's more cost-effective to do the API thing and pay for the exact amount of usage. And there are ways to even use models in aggregate, or a way that identifies which model to use to answer your prompt (i.e. the free one or the paid one). What's your take on that?
@@Corteum Totally depends on your usage and use case; with the API, you lose the ability to 'easily' upload files, images, etc., and you have to step into using code and more sophisticated libraries to accomplish the same thing.
I've built my own 'ChatGPT' powered by the API, but I never take for granted that even keeping a back-and-forth conversation in memory is a quality-of-life feature I appreciate about the front-end ChatGPT.
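To make that last point concrete, here's a rough sketch of what "keeping the conversation in memory" means once you go the API route; the model name and helper function are illustrative assumptions:

```python
# With the raw API, the back-and-forth memory that ChatGPT's front end gives
# you for free has to be managed yourself, by resending the growing message
# list on every turn (OpenAI Python SDK; model name is an assumption).
from openai import OpenAI

client = OpenAI()
history = []  # the conversation "memory" you maintain yourself

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})  # remember the answer for the next turn
    return reply

print(chat("Summarize the pros of using the API over the ChatGPT front end."))
print(chat("Now list the cons."))  # works because the first exchange was resent with this turn
```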
Is it possible to apply any of the techniques that o1 uses to GPT-4o to improve its responses? Possibly through custom instructions? I find GPT-4o very frustrating to work with, but I have also seen evidence that telling it to show its chain of thought and working does improve its logic.
Awesome question, literally one of my next videos haha - one word: ruminate
It will help GPT-4o try and actually reflect instead of just following 'step by step'
@@Mark_Kashef Thanks Mark. Do you have an example of how to use this in a prompt? Would you use it in CI or just the prompt?
I'm a man with ADHD, so I struggle with abstract concepts. I use 4o to help me break down abstract goals and concepts; it doesn't always succeed with this, but I find that the quality really begins to slip, possibly because I have lengthy conversations in one chat over multiple responses.
@@AdamB1_23 this Medium article will help a lot with this; imperfect, but better way to use gpt4o for this use case:
bit.ly/3XqGzTC
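Roughly, the "ruminate" idea can be sketched like this; the instruction wording below is an illustrative guess at the general pattern (not the article's exact prompt), and the same text could sit in ChatGPT's Custom Instructions or in a system message via the API:

```python
# Sketch of a "ruminate" instruction for GPT-4o (OpenAI Python SDK;
# the instruction text and model name are illustrative assumptions).
from openai import OpenAI

client = OpenAI()

RUMINATE_INSTRUCTIONS = (
    "Before answering, ruminate on the request: restate the goal in your own words, "
    "consider at least two approaches, note what could go wrong with each, "
    "then give your final answer and briefly explain why you chose it."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": RUMINATE_INSTRUCTIONS},
        {"role": "user", "content": "Help me break 'get better at public speaking' into concrete weekly goals."},
    ],
)
print(response.choices[0].message.content)
```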
I'm not an expert, but are you sure the o1 models can't benefit from longer prompts? I understand why you don't need to specify chain of thought anymore, but I find that when I test it out, the more multifaceted I make the prompt, the better the response I often get.
Thanks for the question!
The way the model is designed, it tries to be thoughtful about whatever you provide as input. If I load a 3-5 page prompt, there's a chance it reads through and reflects on each section without factoring in the whole prompt holistically while doing so.
Especially since, at the moment, we don't have much control over which parts of a prompt the model should ruminate on versus others.
This will become even more pronounced when they release the ability to upload files and browse, as the 'diverted attention' of analyzing a file and a long prompt will lead to some problems. I would guess that's the reason they didn't enable file uploads to begin with, since it might condition behaviour to 'provide what you need' in piecemeal fashion.
My two cents :)
Here is an interesting example of giving o1 a long, detailed prompt for a complex business problem and having it reason through the solution: ua-cam.com/video/k6U4GwYok7w/v-deo.htmlsi=PpxirMXYrf3net1s . It should be OK to give a long prompt, but it should not contain fluff or irrelevant information.
agreed 👍🏼 you have to keep the value per unit token high with no fat
I enjoyed this video. We are building a platform for testing LLM apps, including prompts. I think you might find it useful for your agency. Would love your feedback either way. Up for a chat?
For sure! Just saw your email 📧 will respond in kind.
the overthinking it does makes me absolutely insane lol
I literally use it maybe once a week haha -- not amazing until they add the ability to control the time spent thinking :)
the background ticking sound is annoying
noted, thanks for the feedback
💪🎯🎬🦅🤙🙏
A lot of advice but not enough explanation of why.
I try to keep things as accessible as possible instead of going into papers and taking an academic lens on it; main goal of this channel is to help as many folks as possible who wouldn't have the time to dive into the 'why' versus the 'what' or 'how' -- that said, thanks for your feedback!
First
pretty basic but ok
Compared to the typical prompting required, it's indeed simpler; a shift in prompting mindset is required.