I have always loved that you don't edit out errors and mistakes, and show us your process of trying to understand them.
I always love that he laughs at his mistakes.
I agree! That's very valuable: seeing how intelligent people analyze a problem and look for a solution. It also gives us time to think along with the video.
This is the beginning of numerous startups
I love how simple the API is! We really are in the gold rush of AI based applications.
We are indeed, it's exciting and scary. I'm writing my first chatgpt app that will troll scammers on craigslist.
@@nickwinn Simple, but it would have been nice if they managed the history for you.
I started watching your videos in 2017 in college. Thanks to you, and specifically your pygame series, I'm now a mid-level SWE
I was about to write "you don't need an API key", but then I did a sanity check. I thought I was using the `gpt-3.5-turbo` API for free, but what's actually true is this: if `OPENAI_API_KEY` is in your environment variables, then `import openai` will automatically find that key and use it.
I'd previously set the env var for testing `text-davinci-003` (GPT-3) AND I'd included `openai.api_key = os.getenv('OPENAI_API_KEY')` in my code, but when I tested `gpt-3.5-turbo` for the first time I forgot the second line, and when it worked I assumed they'd removed the need for a key.
Great video! Thanks!
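To illustrate the point in the comment above: the v0.x `openai` library reads the key from the environment on its own, so the explicit `openai.api_key = ...` line is optional. A minimal sketch (the key value below is a made-up placeholder):

```python
import os

# Set the key once in your shell, e.g. `export OPENAI_API_KEY=sk-...`.
# We use a fake placeholder here purely for demonstration:
os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder")

# `import openai` picks this variable up automatically, which is why API
# calls can succeed even without the explicit assignment below:
# openai.api_key = os.getenv("OPENAI_API_KEY")
key = os.getenv("OPENAI_API_KEY")
```

Either way the key ends up in the same place, which explains why forgetting the explicit line still "worked".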
I've been a sub of yours for years. Funny thing is, I'm not a programmer, and I don't even remotely work in the field you produce videos on. I just love watching your curiosity take you around, and how you take your time to teach others. Well done, mate.
So true❤
Total side note, but I wanted to tell you how amazing your Neural Networks from Scratch book is. I've started down a few roads with NNs and I normally prefer video, but you have really made it so clear and so much fun to learn. Congratulations on creating the perfect technology book!
Awesome to hear this! Thank you!
Please share the book name
@@sentdex I am a painter, and I like all you do, even the mind-blowing first-hand errors you can't hold your laugh about. I ROFL too, at everything, simply because you're right.
In my experience, the system role is really useful for things like restricting what the bot can do, and also for giving it background information, like what it would like to be called or what tasks it can perform. So when the user asks, "What can you do for me?", the chatbot can answer with what the system message says it can do or what its main purpose is, along with the personality you want the bot to have.
The system message might be something like: "You are a language-translation helper bot. You cannot talk about anything else. Your name is Bob and you are a stern but calm teacher."
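The system message described above slots into the chat API's `messages` list as the first entry. A minimal sketch of the shape (the wording is taken from the comment; the user question is illustrative):

```python
# The persona/constraints live in the "system" message; user turns follow it.
system_prompt = (
    "You are a language-translation helper bot. You cannot talk about "
    "anything else. Your name is Bob and you are a stern but calm teacher."
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "What can you do for me?"},
]

# This list would then be passed to the API, e.g.:
# openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
```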
I smell spam. I read this one twice already.
Greatest guy on the internet. Always loved the way you follow your passion and work not just on classical stuff, but play with whatever is interesting to you.
If you ever want to work on AI-aided chemistry/medicine, our Chemistry and Artificial Intelligence lab at ITMO University is fully open for you 🥰
Continue making great stuff 👍
Amazing!!! Thank you!!! I was always waiting for this!!
I have notifications set for your channel, but I never get any notifications. I also haven't seen any of your videos in my feed for the past year.
Best channel to learn python.
Bro, this is a great tutorial. Most other people are just publishing nonsense gpt-api stuff. You are a bit of a whacko (compliment), but I 😍 your speed of tutoring. You did not waste our time by going back to check the AI's reply about which moon it was sizing. Great stuff, dude.
Who has been waiting for this for a long time now?
A big thanks from an Indian. Amazing stuff you post. God bless you.
The longer between this video and your next one, the more excited I get.😂😂
It's quite interesting. I tried to give it a role with this "spell": "You are a well-trained AI multi-task language translator. When I input non-Chinese sentences, you should output the Chinese translation. When I input Chinese sentences, you should output Vietnamese. You only need to output the translation result, no other words or explanation. If you understand, say OK."
It succeeded at first, but with more sentences it got confused: even when I input a Chinese sentence, it returned a Chinese "translation" (the same words, since nothing needed translating) rather than a Vietnamese one. I'm not sure why, but it just stops understanding, or forgets the task, after I input around 5-6 non-Chinese sentences mixed with some Chinese ones.
Amazing! I am building a cocktail machine with this. I recognize voice with speech-to-text and feed it into the API, like "I want a martini, please". A custom add-on converts the order into a JSON in a given format that my cocktail machine can use to make the cocktail. :)
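The "convert the order to JSON in a given format" step above could be sketched like this. The schema here is entirely hypothetical (the real machine's format will differ), and the sample reply stands in for an actual model response:

```python
import json

# Hypothetical system prompt pinning the model to a machine-readable schema:
SYSTEM_PROMPT = (
    "You convert drink orders into JSON with keys: name, ingredients "
    "(a list of {ingredient, ml} objects). Output JSON only."
)

def parse_recipe(model_reply: str) -> dict:
    """Turn the model's JSON-only reply into the dict the machine consumes."""
    return json.loads(model_reply)

# A reply shaped the way the system prompt requests:
reply = (
    '{"name": "martini", "ingredients": '
    '[{"ingredient": "gin", "ml": 60}, {"ingredient": "dry vermouth", "ml": 10}]}'
)
recipe = parse_recipe(reply)
```

In practice you would also validate the parsed dict before pouring anything, since the model can occasionally return malformed JSON.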
Nice job, man. Regards from Brazil!
I used the API to make a chat bot for my Discord community, but they used it too much and I could not afford to keep it going. But man, having a group chat with AI is crazy.
Perfect timing, I was just about to use it in my next project.
You're awesome. Keep making simple videos like this! I just subscribed because of how simple this was.
System is where I define the persona of the bot, any special instructions, and, most importantly, where I dump any additional information that will be useful to the bot: text retrieved using semantic search, summarized chat logs from previous conversations, etc.
Always loved your tutorial videos
Great work! The API is extremely easy to use, and I was able to create a small hack (little-reasoner) that combines the power of ChatGPT and the Z3 theorem prover.
thank you so much for this brotha. real lifesaver
awesome video as always thank you very much, I hope you have a great day!
Back to tutorials! Hell yeah!
I wonder where the conversations would end up if you have another ChatGPT model play the role of the user
Black holes my friend
@@brandonbahret5632 It still needs a user prompt in it to answer any questions. In short: it doesn't change a thing about how it interacts with the user.
@@Primarycolours- What? No, you can totally have ChatGPT interview an instance of itself. It's just like asking GPT-3 to generate a transcript and not providing any stop codes.
Which extension in VS Code helps with the completion of syntax like that ?
Tons of wisdom, as always. We thank you! 🤓
I love how you took one of the previous top comments (previous video) into consideration to "live" code again, like "back then" when your channel was small.
Although they say the assistant role is needed for ChatGPT to remember previous responses, in my experience it only works if you also define a system role.
Always loved your work. Thank you for your inspiration; I'm a deep fan of yours.
Man, I love you.
It makes perfect sense. It is not a bug. GPT has no idea what moon you are referring to. It just knows what people have said about the moon, and if they didn't clarify which moon, then it has no idea. In fact, it never has any idea. It is a stochastic parrot.
15:23 Since the initial question never said "Earth's moon", the AI had to infer that's what you meant. It is technically true that if you had referenced "Earth's moon" in some prior conversation, the history of that prior conversation would not be given to the AI. The AI can access chat history, but only the current chat's history.
Thank you so much for sharing 💚💚💚💚
Ahh yeah, this one is cheap to use and this is a great example. Thank you!
Thank you
You are a nerd's nerd, and I love it.
Great video, thanks.
I just love your videos ♥️
Hi, how do you set whether the API uses GPT-3.5 or GPT-4? There is no setting when you generate the key, as far as I can see... please help. Cheers
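To the question above: the model isn't tied to the key at all; it's chosen per request via the `model` argument. A minimal sketch of the request shape (v0.x library call commented out):

```python
# The API key authenticates you; the model is picked in each call.
request = {
    "model": "gpt-3.5-turbo",  # swap in "gpt-4" if your account has access
    "messages": [{"role": "user", "content": "Hello!"}],
}
# openai.ChatCompletion.create(**request)
```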
This API is crazy.
Question: to remove the user's input from the textbox in Gradio, do you need to use "with gr.Blocks() as demo"?
I noticed I was using `gr.Interface`:
demo = gr.Interface(
    fn=CustomChatGPT,
    inputs=input_textbox,
    outputs=output_textbox,
    title="",
)
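For what it's worth, one common pattern is indeed to use `gr.Blocks`, because its event callbacks can return values for multiple components, including an empty string for the input textbox to clear it. A sketch under that assumption (the `respond` function stands in for the real ChatGPT call):

```python
def respond(user_text, history):
    reply = f"echo: {user_text}"            # stand-in for the real ChatGPT call
    history = history + [(user_text, reply)]
    return "", history                       # returning "" clears the input box

# Wiring sketch (requires gradio; shown for shape only):
# import gradio as gr
# with gr.Blocks() as demo:
#     chat = gr.Chatbot()
#     box = gr.Textbox()
#     box.submit(respond, [box, chat], [box, chat])
# demo.launch()
```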
Thanks for the great content, sentdex! If I understand the process correctly, every time the user adds a message we need to extend message_history and pass the entire message_history to ChatGPT, is that right? My concern is that the cost of giving N responses would then scale on the order of N^2 (if all future messages require the full history). I cannot think of any other way to use ChatGPT currently, unless there is some "delta" API call that can pass in new messages and load past tokens for free? I think this is a rather big barrier to "indie" developers adding ChatGPT to certain applications. I wonder if you have any thoughts on this!
Yeah I'd like to know this as well.
I can see it getting out of hand and just swallowing tokens.
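The quadratic concern in this thread can be made concrete, along with the usual mitigation of truncating (or summarizing) older messages before each call. A rough sketch, assuming a fixed per-message token count for simplicity:

```python
def total_tokens_sent(per_message_tokens, n_turns):
    # On turn k you resend all k-1 previous messages plus the new one,
    # i.e. k messages, so the total grows like the sum 1 + 2 + ... + N.
    return sum(per_message_tokens * k for k in range(1, n_turns + 1))

def truncate_history(history, max_messages=8):
    """Keep only the most recent messages; a summary could stand in for the rest."""
    return history[-max_messages:]
```

For example, 10 turns of ~50-token messages already resends 2,750 tokens in total, which is why capping or summarizing the history matters for cost.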
Me: Sad, having a rough day....
Laptop: "What is going on everybody..." and I'm happy again!
How would I customize the page where you are asking questions? For example, if you wanted to turn the textbox green and the chatbot box red?
They've got to come out with a self-hosted version. Not having to send all the data to OpenAI to get a prediction would be a game changer.
First we need to figure out a way to make these models smaller; currently you need a very beefy computer to run them at any reasonable speed.
actually they do have one: ua-cam.com/video/rGsnkkzV2_o/v-deo.html
Been meaning to get around to it, but I'd really like to set up ChatGPT so that it can talk with itself on two different systems, just to see what happens. :)
Wow, Thank you
Wouldn't having to send the message history every time you want a prediction get very expensive, token-wise?
man this world is getting good
Timestamps
[ 00:06:30 ] Subject: Token Limit. Feeding a conversation back in a prompt as context ... You can SUMMARIZE, and ChatGPT can do that for you. ..
...
[ 00:12:01 ] ChatGPT Forgets What Moon It's Talking About 😂
...
[ 00:30:57 ] Isn't It Ironic! SentDex is trying to break out of the Matrix with the Help Of ChatGPT
...
lol, I'm following along, but I didn't get this fluke with the "which moon is this in reference to" question - it worked fine for me
I was wondering when you would upload such a video
Love your Python setup in VS Code for OpenAI!! Do you have a video tutorial on it? Thanks!
Great video, thanks for posting it. Can you try editing the first prompt and have it say what is the circumference of Earth's moon? My guess is the script could reference the message history but since there are so many moons it wasn't sure you meant Earth's. Anywho, good content!
You could explore building something with Langchain
Yeah, sometimes they deprecate it on purpose.
Thanks sentdex.
Let's not forget that other planets in our solar system have named moons -- ours is actually named "the Moon" -- hopefully that doesn't send the AI into an infinite loop
Luna ?
hi I have a question please.
How can I activate the autocomplete that you are using
You are amazing .. thanxxxxxxxxx
It would be great if you could make a series on Transformer models!!!
Which intellisense are you using in VS Code?
Sentdex using Copilot?? 😯 I remember when he used IDLE
I have been a mega fan of copilot since release tbh.
@@sentdex I have found Rubberduck on VS Code, which has been updated with the latest changes to the API. I really like it; I would guess Rubberduck has more options.
I may need to try Copilot more to see its uniqueness.
That was no mistake. ChatGPT replied something like "the Earth's moon is around 10,921 km"; you just missed it. It's working properly.
Are Azure's generative AI solutions the only option for both fine-tuning and building guardrails for niche chatbots? It seems to be the only option for feeding custom indexes in a GUI so that the chatbot is bespoke for specialized use cases.
In my experience, it's difficult to restrain GPT. For example, if you request that future messages conform to some format but later ask it to stop, it will stop. No matter how adamant you are that it should never violate a rule stated in a message, that rule can be overruled by a future message. Thoughts?
Great vid, but I was just wondering what the rate limit on gpt-3.5-turbo is, since I couldn't find any solid documentation online. I plan on mostly using it for my own recreational use, which will involve quite a few requests being sent. Currently still on the free plan, but I want to confirm this before going paid.
@20:19 Isn't that type of prompt what they give as an example in the docs for system prompts?
What extension are you using for auto complete, co-pilot?
Nevermind. You answered it in the video. ;-)
Why do I keep getting the error that AttributeError: module 'openai' has no attribute 'ChatCompletion'?
12:44 maybe it did not catch the message history. It probably answered from the data it was trained on.
Couldn't even get past the first run. "openai.error.InvalidRequestError: The model `gpt-3.5-turbo0302` does not exist"
amazing !
Hi, I want to learn about data analysis. I don't know anything about it, but I'm interested in starting something new. I was looking into starting the data analysis course on Coursera, but just wanted to see if I should take another route before doing that. What would you recommend?
Thanks in advance!
how do you feel about using a terminal within vscode?
What font family are you using for the vs code?
What is the VS Code extension that you use for interactive Python?
Bing (GPT-4) is infinitely better with Python, especially when coding scripts for the latest Blender.
1:41 whats the downside to using jupyter notebook?? ;-;
How can I use this and create my own using my own answers for the bot? Thanks~
Thanks for your video. I'm wondering how much it will cost as we keep sending the message history.
My question is really: if we keep building the history between messages, will our cost increase because we keep submitting the history?
Thanks for the video! How can I publish it so it's public, rather than running only locally?
Which VS Code plugin are you using to get those code suggestions?
What extension is it that autocompletes the code? Thanks
15:20 I was using the free website version yesterday and ran into a similar problem. I requested an output based on information I had provided previously (above), and it said it could not refer to my previous messages. Maybe it's something they temporarily discontinued to increase speed.
You probably went past the token limit
Even more curious: you ask for 'the moon' and the reply included additional context already, referring to 'earths moon'…
Why do I get "module 'openai' has no attribute "ChatCompletion"!?
It is not weird behaviour; it was telling you that it can't verify the questions you ask it at the time you're asking them.
As such, it was telling you that it made an assumption that you were requesting the size of Earth's moon.
There is a limit to what this language model can remember in a conversation, but it didn't pass the 3,000-word limit. While it won't reference previous conversations (it's re-run from a clean slate), it should at least remember its history of messages in a conversation of this size. It was odd to me when it said it didn't remember its past responses.
@@iTRYMLGaming It wasn't referring to the history of the chat, though.
It was saying that it made an assumption based on the question it was asked, and, furthermore, it was informing the user that it had to do so because it is unable to ask follow-up questions about the question.
Hence why it assumed that "the moon" in this case referred to Earth's moon.
The issue here isn't a failure of ChatGPT; it's a failure of the person using it to understand the English it used to present that feedback.
@@weedfreer It's supposed to predict what the user wants based on a question, so it worked as intended in that regard. It should have responded "The circumference of the Earth's moon is approx. 10,921 km", because that was the moon it was referring to based on the first question. It should not have included "I don't remember my previous responses".
@@iTRYMLGaming it didn't say that it didn't remember what it previously responded with though
12:19 the prompt returned reads that way
Is there a limit to how long the message history could be?
I got similar behavior on my first go yesterday. Seems like it's confused by my role and its role.
So what is the difference between these LLMs (GPT-3/4, Alpaca, etc.), AlphaFold/ESM-2, and the types of systems used to create efficient, biologically inspired structures like frames for vehicles or furniture? And AlphaTensor? Wolfram Alpha? What other types of AI/ML systems are there? Some are trying to do things as well as humans; some are doing things we cannot do. How are these different things coded? What ideas are they based on? How can they be merged? Can each be used to improve the others? What are Evoformers vs transformers? And what other things are there?
You said that the API itself isn't going to manage your history for you, so how might we do that? Just start with some sort of message-history variable for now to keep it simple, but we might use a database or some other storage method. Can you explain how we could do that using a database, for example?
One way to do this would be to store each message, along with its associated metadata (such as sender, timestamp, etc.), in a database table. Then, when generating responses using the ChatGPT API, you could query the database for relevant messages and use them to provide context for the API.
@@funkahontas OK, I get it, but how do you relate the stored content to the answer? For example: I tell the AI my name is "X", the AI says "Hi X, nice to meet you", and I store these two entries in the DB. But then? Do I have to write a function that scans the entire DB to search for something like "my name is ..." and pull out the context?
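A minimal sketch of the approach described in this thread, using SQLite. The simplest version sidesteps the "scan the whole DB" worry by just pulling the N most recent messages back as context; semantic search over the table would be a later refinement. Table and function names are illustrative:

```python
import sqlite3

# In-memory DB for demonstration; a file path would persist across runs.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, role TEXT, content TEXT)")

def save(role, content):
    conn.execute("INSERT INTO messages (role, content) VALUES (?, ?)", (role, content))

def recent_context(limit=10):
    rows = conn.execute(
        "SELECT role, content FROM messages ORDER BY id DESC LIMIT ?", (limit,)
    ).fetchall()
    # Reverse so the list is oldest-first, matching the API's message order.
    return [{"role": r, "content": c} for r, c in reversed(rows)]

save("user", 'my name is "X"')
save("assistant", "Hi X, nice to meet you")
context = recent_context()
# `context` can now be prepended to the messages list sent to the API.
```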
What os are you using? And was that a bash terminal? Looks more like zsh terminal, would love to see a video about your setup
It looks like Ubuntu
Would you be interested in trying out a video on Google Cloud Functions with Python and OpenAI? I like the idea that it can run without being on my local machine, and cron jobs can deploy those scripts. But mainly I'd like to see everything you're doing here in this video run on a Google Cloud server. What are your thoughts? I like how you don't beat around the bush and actually teach/do something; so many people on UA-cam just talk endlessly without accomplishing anything. 🤣🤣🤣
Another thing is that, to a certain extent, I can edit and create from my phone using Google Cloud through the web browser.
Hi, sendtex. Python or JavaScript for backend, which would you recommend? I can't decide between them
That's entirely dependent on you (and your team) and your project.
Most of the website stuff I do is fairly simple, so I just use Node.js, but if I were to write more complex endpoints I'd use Django (Python) or Spring Boot (Java).
If you know one of those languages already, go with that; otherwise choose one of them and learn it.
Are you gonna be continuing the nnfs series? :(
Yes
"System" role are used to restrick the system reply, think it as a role that accept your prompt command. you can set gpt personality in this role e.g. "act and reply as shackspear" and etc.