My new program is now live! Apply fast: skool.com/makerschool/about
Get your first automation customer (guaranteed). Price increases every 10 members 🙏😤
Built this to the video specs, works flawlessly. Well done Sir! - from a fellow Canadian living thousands of miles south :)
Hey @TheSheldonBernstein, can you show your flow? I'm having trouble with mine.
Wow so glad the algorithm sent your channel my way. Glad to be here before 1k 😊🔥🔥🔥
Glad to have you! Thanks.
Same here. Nice that the algo sent your channel my way.
@@SevenOneTv. is this a sellable service?
@@Elite2235-q9z Yes, if you know how. I'm currently sourcing leads to sell the service. Got the automation to work after replacing some modules.
Watched the vid a couple of days ago and finally got time to build it today, what an amazing piece of system Nick, so generous of you to share it! 🙌🏼
I might need to fix the prompt a bit though, it went overboard with the Spartanism 😂
"Picture this: a world where the burdens of HR are lifted by the Herculean strength of technology. That's HR automation for you, a smart sidekick in the epic battle against HR drudgery. It's like outfitting your HR team with shield and spear, and letting them take on an army of tasks without breaking a sweat. Automation in human resources? It's not just a fancy phrase; it's revolutionizing the game."
try 'conversational spartan' - seems to be less so
Funny output Santi 😂 agree with @TheSheldonBernstein
Nice content. Could you share the template??
I was hoping to get this as well but unfortunately it's not in his resources link. Hopefully he can add it because this is a monster!
@@NatGreenOnline Follow the video. It's all there and you can have it built and running within an hour. I did it and it works flawlessly.
Why would he share it if he “sells it for $5k”?
Can one substitute Google Forms for Typeform?
Nick, thought I knew Make. You made me realize I don’t. I also blamed Make for what I thought were its limitations. Now I realize the issue is 9 out of 10 times with the operator. 😂
Hell of a compliment! Thank you 🙏
If your scenario template isn't for sale, I'd like to learn the multi-prompt stacking details you're using in the workflow. Ty
He's currently building demand. I also suspect he implements these sorts of things for you when you pay for his consulting via Leftclick. With Leftclick you're basically paying for his combination of expertise and his automation blueprints, I think. Nick, chime in if that's not correct.
Follow the video. It's all there and you can have it built and running within an hour. I did it and it works flawlessly.
Just built this and linked it to WordPress, works amazingly. Thanks a lot man.
Glad it helped 🙏
I saved this video; I might need to go through it several times slowly. Great work, thanks for sharing Nick.
My pleasure! If any of the Make.com stuff is unfamiliar check out my basics course-should help. Best of luck!
Great content, Nick! So well done and actionable. Your channel is gonna be huge!
Glad you think so! I really enjoy making these videos and will keep it up as long as I can 🙏
Subscribed! Your video is super helpful. Glad YT recommended you. Reminds me of when I discovered Alex Hormozi before he blew up the Internet Marketing space. The value in your content is off the charts good. I learned something profound from the first 2 videos I watched. I'll be tuning in regularly to see what else you put out.
Bravo on deciding to post on YT regularly. You are a breath of fresh air… a real world user sharing the secret sauce.
Hell ya Mark! Thanks so much for this feedback man.
Wow dude! Just wow ❤ bought the template. Appreciate you 🙏🏻
Appreciate you more man 🙏
Amazing value in this video. 100% subscribed and will watch all your videos. Question: The outlines it produces continue to output 10+ Heading 2 sections, resulting in a really long final piece (5000+ words) even though I'm setting word count to under 1500 (I know GPT struggles with counting words). I tried to solve this by setting the Max Tokens field to something like 300-400, but what ends up happening is that the system tries to still create a really long outline, but gets cut off and the output isn't a finished outline. Any thoughts?
Hey Nick, this is an amazing tutorial! I found your video while building something similar. Could you zoom in (in another video) on your prompting and how you implement it in Make, seems to be a great hack for creating amazing content. Thanks!
Sure thing-will add to the queue. Thank you!
@22:39 Is it me, or did you skip over the Parse JSON module that broke? I see this is the module that parses the JSON containing the category and tags... but it is not working for me. I am getting an error from WordPress re: arrays and integers.
Either way, thanks for all the videos! Good stuff.
20:50 I cannot reference the variable set above the router in the "Get variable" module below. I tried running the module alone first to see if the option appears, but nothing. Any ideas?
You know, this is a pretty pesky issue that tripped me up for a while. Make doesn't do a good job explaining it so I recorded a video:
www.loom.com/share/db73a9502f8c427fb642b7c793acfa0f?sid=e4f70435-88ab-48dc-8865-e95b5b61dcb2
Hope this helps 🙏
@@nicksaraev you're the BEST. I hadn't seen this comment, I'm gonna check it out. Thanks!
Nick, you mentioned your introductory course - do you have a link for that?
It's right here on YouTube!
ua-cam.com/video/PjKHs-L6Sn4/v-deo.html
Hope it helps 🙏
INCREDIBLE MAN. Thank you for your response, and know that I'll follow it to a T! I'll keep you posted on progress.
@@nicksaraev
This is awesome. I had one similar but this filled in a couple of missing pieces for me. There need to be more parameters for word count though. Mine was also double the word count. Have you figured this out? Potentially it needs to be incorporated in a formula before the second OpenAI module?
Unfortunately word count is something LLMs just don't do well. They have no conceptual understanding of what a "word" is, and the autoregressive way they work (i.e. generating token N+1 based on tokens N-X ... N-1, etc.) makes this effectively impossible, IIRC. You can get reasonably close with a few additional procedural methods though.
1. Add a switch module that takes the desired word count as input and outputs an intro prompt for GPT-4.
2. If the word count is between 0-1K, set the prompt to something like "be succinct". If it's 1K-2K, maybe no prompt. If the desired word count is higher, set the prompt to something like "be comprehensive", etc. (rough sketch below)
You could alternatively skip headings depending on projected word count as well, although I haven't found any of that necessary. Personally I just stick with whatever length it gives me, most SEO optimized pieces are on the longer end anyway. Hope this helps 🙏
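For anyone who wants to prototype that switch logic outside of Make first, here's a minimal Python sketch of the word-count-to-prompt mapping described above. The thresholds and prompt wording are illustrative assumptions, not Nick's exact values.

```python
def length_hint(desired_word_count: int) -> str:
    """Map a desired word count to an instruction prepended to the GPT-4 prompt.
    Thresholds and wording are illustrative, not the exact values from the video."""
    if desired_word_count <= 1000:
        return "Be succinct. Keep each section short and avoid filler."
    if desired_word_count <= 2000:
        return ""  # mid-range: no extra length instruction
    return "Be comprehensive. Expand each section with detail and examples."

# Example: build the system prompt for a ~800-word article
system_prompt = f"{length_hint(800)}\nWrite an SEO-optimized blog section in markdown."
print(system_prompt)
```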
at 20:00 what is the "Item 5" Role and Message Content? We can't see 👀
Nick - are you filtering the router → random 25% chance → sectionText somehow? How does it know it's gone through the 25% chance branch first before going back through the Make flow?
Hi Nick, can you share the JSON file for us? Would be of great help. Thanks!
If you take an hour and follow the video, you can legit build this from scratch - I did and it works flawlessly.
@@TheSheldonBernstein I know, but the JSON file makes it easier.
The OpenAI module has "Accessibility in Retail: How to Make Your Store More Accessible" hard-coded.
I figured out how to fix the hard-coded variables from Typeform, but I'm stuck on chat Message 4. The entire markdown article from Nick's run is populated in this field. The article should be an output, but it's hard-coded as an input. Stumped.
If anyone else is stuck: delete the hard-coded user messages. The ones that accept variables from previous modules are also in the chat module and will take over.
Great stuff Nick! Quick question - at 13:10 in the video, when adding the folder ID to the Google Doc, I can only seem to add existing folders rather than the folder ID tag. Any ideas?
Use the defaults to select the folder you want. Even though his screen shows manual mapping, use the default mode to select the folder where you want the folder/files to be created.
man, you are such a genius
Fact of life: people named Nic are disproportionately geniuses 😏
How did you put the Break above the GPT module?
Hey Jason! You can just drag and drop-should connect automatically.
PS make sure "store incomplete executions" is enabled in the scenario settings first! Here's a help doc: www.make.com/en/help/scenarios/incomplete-executions#:~:text=To%20enable%20it%2C%20enable%20the,to%20the%20Incomplete%20executions%20folder.
@@nicksaraev So this should fix the error: "Scenario was requested to STOP because MAXIMUM EXECUTION TIMEOUT [5 minutes] has elapsed", right?
Amazing! Would love to see more on how Contentful is set up. :D
wow. I'll have to go through this a few times to really get it. But this is great. I wish the screen recording was a bit clearer though.
Thx for the feedback man
Great video. I'm running into a 2,000-character limit in my Notion database. Does Google Sheets have the same limitation? Any advice on how to work around the 2,000-character limit?
The world of automation is run by iterators 🎉
Or at least in Make.
Awesome vid 🚀
PS. Challenge: Do you know of a "free" way of adding in-text images into a Webflow Rich Text Field?
So: upload images somewhere, get a public link (incl. file extension), and then replace the in-text placeholders.
Webflow only accepts public image URLs including file extension in the rich text field.
Discord works okay, but it started to “error out” on us. Not good when you have a backlog of 300 articles.
Anyway keep up the good momentum 💪
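For what it's worth, the replacement step at the end of that workflow is the easy part once the images are hosted somewhere public. Here's a minimal Python sketch, assuming hypothetical placeholders like [IMAGE_1] inside the rich text HTML and a dict of already-uploaded public URLs; the hosting step itself isn't covered here.

```python
def insert_images(rich_text_html: str, image_urls: dict[str, str]) -> str:
    """Replace in-text placeholders with <img> tags pointing at public URLs.
    Placeholder format and variable names are assumptions for illustration."""
    for placeholder, url in image_urls.items():
        rich_text_html = rich_text_html.replace(placeholder, f'<img src="{url}" alt="">')
    return rich_text_html

html = "<p>Intro paragraph.</p>[IMAGE_1]<p>More text.</p>"
urls = {"[IMAGE_1]": "https://example.com/uploads/hero.png"}
print(insert_images(html, urls))
```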
Does this also work in other languages?
Hey Nick, loving the content. I noticed this blueprint is not in the Notion link. Could you please upload it? Thanks a ton!
What is the name of the template for your website?
thanks. glad to find your channel!
Would be fun to see how we could make a frontend website for users that would want to use a service like this. Make a dashboard of sorts and have users be able to sign up to use a similar automation and have it display on the site instead.
I'm trying to build this and bought the template, but it's a little hard to follow even when pausing it.
Hey Nick - great video, it's awesome, thanks!
Got a question about the 25% split. From your video, I couldn't see where it was referencing the section text, it just had your example text. For us it's just generating an output from the example text itself, rather than using it just as a writing guide.
I can see that you have an item #5, but we can't see what's in it....
Hey Paul, thanks for the love man. I left this comment below but will paste it here for convenience too:
Message 5 is a User prompt and it's just the output of the previous GPT-4 module. The idea here is that, 25% of the time, I'm taking the generation and passing it through an additional step that "reformats" it into bullet points etc.
The exact variable is {{14.choices[].message.content}}. If you copy that into the Message Content field of Message 5 and replace "14" with whatever number your module is, it should pull correctly.
Hope this helps 🙏
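If it helps to see that branching outside of Make, here's a rough Python sketch of the idea: roughly 25% of generated sections get passed through an extra "reformat" step before moving on. The reformat_with_gpt function below is a stand-in for the additional GPT-4 module, not Nick's actual prompt.

```python
import random

def reformat_with_gpt(section_text: str) -> str:
    # Stand-in for the extra GPT-4 call that turns prose into bullets, lists, etc.
    return "- " + section_text.replace(". ", ".\n- ")

def produce_section(section_text: str) -> str:
    """Mirror the router logic: roughly 25% of sections get an extra reformatting pass."""
    if random.random() < 0.25:
        return reformat_with_gpt(section_text)
    return section_text

print(produce_section("HR automation saves time. It reduces manual errors."))
```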
@@nicksaraev Thanks so much!
I've been building a somewhat similar dashboard product, but mine has a lot of other features like natural language editing, approval, automated research, etc. I'd love to trade notes.
The scenario works well but after running some AI detection tests, it's showing that most of these articles are 40%-50% detected as AI written. What can we do to lower the score whilst automating?
how do I put an arrow in there??
So I'm strugglin' a little here. Sometimes my final blog output is in HTML, and sometimes it's formatted correctly. What could be causing the varying results? I'll likely take this question to the FB group, but maybe someone here has already had/resolved this issue.
Got a custom JS error, any ideas?
Hi Nick. How do you use the Break module connected to OpenAI - Create a Completion?
Hey Artem-you have to click on the gear icon at the bottom of the scenario builder (called "Scenario Settings") and then check "Allow Storing of Incomplete Executions". Then you can drag and drop the Break next to any module and it'll automatically connect. I use the defaults most times (15min x 3).
Hope this helps 🙏
Right-click on the module, select ADD ERROR HANDLER, enable "Allow Storing of Incomplete Executions", and use the defaults as @nicksaraev suggests.
Great stuff, Nick. In the example article, did you mean to have some of the text in Dutch? 🤔
Just found this channel. Are you able to use Make for making any video content?
Yes absolutely. My business partner and I experimented with this around a year ago when AI voices were still iffy. We used screenshots from Reddit to build out a sort of "Reddit comment" channel, and it ended up getting something like 20K views.
ua-cam.com/video/6UWtz6h9Gec/v-deo.html
You can do the same thing now with much better quality, and there's also a suite of "video AI" tools that let you generate visuals. If I had the time to do this again today I totally would, very fun.
Hi there, when the router hits the random number it does run the top branch and creates a rewritten section that gets stored in sectionText. However, how do I configure Get sectionText so that it retrieves the updated sectionText and not the original?
Simply Amazing! I'm floored
GPT is not writing out any content for the outline or articles. It only writes the first word of the title, or it just comes out with the word OUTLINE.
I downloaded the blueprint and uploaded it to ChatGPT and it gave me the answer... I didn't have enough tokens allocated in the GPT module.
Around 23 minutes in... not sure what should happen regarding Contentful. Is there a video for that? Appreciate you man, thanks.
I would love to see how you can build a YouTube script with this type of workflow.
Trying to build the scenario but I got stuck at step 1 - I can't figure out how to set up the Typeform, Tools and Google Sheets modules - the first 3. Then Nick starts with the 3rd module directly - can someone help? Thanks in advance.
@santi-leone could you help me, since you were able to build the scenario?
What is the message content for "Produce Section Text" in item 5? I saw there's an item 5 and the role is "User", but I didn't see the message content on the screen. Can you comment here? Thank you very much.
Of course Jason! Message 5 is a User prompt and it's just the output of the previous GPT-4 module. The idea here is that, 25% of the time, I'm taking the generation and passing it through an additional step that "reformats" it into bullet points etc.
The exact variable is {{14.choices[].message.content}}. If you copy that into the Message Content field of Message 5 and replace "14" with whatever number your module is, it should pull correctly.
Hope this helps 🙏
Is anyone experiencing issues with the markdown not being processed at the end before generating the final document?
This is great and super valuable information, and I'm looking forward to checking out your other videos… this channel is going to be huge 🎉 One question: in the content-producing GPT step, could you please share what you have in prompt #5? Thanks in advance Nick!
Appreciate this! Yes, prompt #5 is the output of the previous GPT-4 module. I'm pasting a previous reply here so you don't have to search for it:
Message 5 is a User prompt and it's just the output of the previous GPT-4 module. The idea here is that, 25% of the time, I'm taking the generation and passing it through an additional step that "reformats" it into bullet points etc.
The exact variable is {{14.choices[].message.content}}. If you copy that into the Message Content field of Message 5 and replace "14" with whatever number your module is, it should pull correctly.
Hope this helps 🙏
Thanks for that @@nicksaraev! I just updated the field and the GPT module now produces this error message: "ExecutionInterruptedError - Execution was FORCED to stop because MAXIMUM EXECUTION and OVERLAY TIMEOUT [10 minutes] had elapsed. Origin: Make." Any idea why this happens?
Also, another question. I noticed that my output for the outline (and any long prompt on GPT completions) gets cut off. At first I thought it was due to tokens, so I stopped using GPT-4 Turbo and started using GPT-4 (the exact same model you're using in this video), but it's still getting cut off. Any ideas? Thanks!
Hmm-could be token length? If you click under Advanced there'll be a setting to set maximum output tokens, I usually keep that high (2048) for content purposes.
@@nicksaraev You are a LIFE-SAVER! It was exactly that. Thanks a ton, man. You're putting out heavy gold through content.
Let me know when you start a course or something, I'd be interested :)
PS: Think you could upload this Blueprint please? Thanks a ton 🙏
Increase token count
When my markdown HTML gets uploaded to Google Docs it doesn't turn into rich text, any ideas?
Hmm. Probably one of two things:
1. The model isn't outputting "ATX"-format markdown (a specific flavor of markdown I always get it to generate; you could be using a dumber endpoint like GPT-3.5-Turbo, which sometimes struggles with this).
2. You're not using a "Markdown to HTML" module after the output, or it's not set up correctly. Check "GitHub flavored markdown" and choose "No" under Sanitize.
Hope this helps 🙏
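If you want to sanity-check the conversion outside of Make, a quick sketch using the Python markdown package (an assumption here; the scenario itself relies on Make's own "Markdown to HTML" module) shows what ATX-style headings should turn into:

```python
import markdown  # pip install markdown

# ATX-style markdown uses leading '#' characters for headings.
md_text = """# Accessibility in Retail
## Why it matters
- Wider audience
- Better SEO
"""

html = markdown.markdown(md_text)
print(html)  # e.g. <h1>Accessibility in Retail</h1>, <h2>Why it matters</h2>, a <ul>...
```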
I found that the final markdown isn't needed. It uploads fine without it.
Thank you for responding. I will keep playing with it. Not sure why I can't get it to work. @@nicksaraev
Interesting, I will keep trying. It isn't working for me. @@MitchAsser
This is really useful in understanding the steps and overall workflow for this scenario. Thank you so much!
Where is the best starting point for understanding the Message Content that you've chosen for each role (System, User, Assistant), within each ChatGPT module? I noted your comment in this video that many don't take advantage of these message inputs. Do you have an overview of this at all? Or can you point to a good resource for this?
Appreciate you! I don't have a specific walkthrough unfortunately. But tbh prompt engineering is very nascent and still more art than science-I would just spend a couple of hours playing with a cheaper model like GPT-3.5-Turbo to build some intuition.
Rules of thumb: few-shot (2-3 user/assistant examples) almost always outperforms zero shot (no examples). Keep your prompt as short as possible. Play with temperature if you're outputting JSON-I generally stick to lower values for code and find it works better.
Hope this helps 🙏
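To make the few-shot rule of thumb concrete, here's a minimal sketch using the OpenAI Python SDK. The two example user/assistant pairs, the topic, and the temperature value are illustrative assumptions, not taken from the video.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Few-shot: two user/assistant example pairs before the real request.
messages = [
    {"role": "system", "content": "You write blog outlines as markdown ATX headings."},
    {"role": "user", "content": "Topic: HR automation"},
    {"role": "assistant", "content": "## Why HR automation matters\n## Core tools\n## Getting started"},
    {"role": "user", "content": "Topic: Remote onboarding"},
    {"role": "assistant", "content": "## The remote onboarding challenge\n## A week-one checklist\n## Tools that help"},
    {"role": "user", "content": "Topic: Accessibility in retail"},
]

response = client.chat.completions.create(
    model="gpt-4",
    messages=messages,
    temperature=0.4,  # lower temperature for more structured, predictable output
)
print(response.choices[0].message.content)
```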
@@nicksaraev Thanks for your latest video, "3 ChatGPT Prompt Engineering Hacks You NEED to Start Using" - it definitely clarified how to best create and manage prompts.
Hello brother, loved your idea. Can I get the blueprint JSON for this?
It's just a wall of text. The content looks good, but you need more variation in the content: bullet points, lists, tables and quotes. This will enhance your readers' engagement and improve the ranking.
Awesome content! Could you please share the template?
If you take an hour and follow the video, you can legit build this from scratch - I did and it works flawlessly.
@Nick Saraev All your videos are pure magic. What is your magic for adding images to the post when they go to WordPress? I have seen others do it, but what is your method? Thank you.
Please share this template.
If you take an hour and follow the video, you can legit build this from scratch - I did and it works flawlessly.
@@TheSheldonBernstein Hi, can you export the blueprint and give me a download link? It would be a great help as I am a beginner at this.
So I spent hours trying and failed. :(
@@TheSheldonBernstein So I spent hours trying and failed. Can you help me by giving me the blueprint file?
Great video Nick, but a steep learning curve for me!
this is really good!
I get it, it's cool and a great idea - but I didn't really see any proof that you made money on this.
Really good work btw, I like it.
Using AI to create content is profitable! There are other effective AI content generators available if you want to increase your revenue.
Not at all clear :(
I recommend going through his playlist called "Make Money with Make", then his breakdown of using Make.
you are way too smart
While I make everyone goes crazy after tell them how money was make,insolation communication,digital slavery,chemical production marketing,biochemical weapon that changes human subject perspective behave..Hahaha what is the meaning of life because the interest was time for interface and the profit was attention..
♥♥
You need help with your prompt... the rest of it is good... those intros that GPT does are terrible.
Was thinking the same
hii