OpenAI finally revealed how ChatGPT works

  • Published May 7, 2024
  • If you're serious about AI, and want to speak with me directly, join my community: www.skool.com/new-society
    Follow me on Twitter - x.com/DavidOndrej1
    Please Subscribe.
    Credits: openai.com/index/introducing-...
    In this video I take a close look at OpenAI's Model Spec for ChatGPT and other LLMs, which they just released.

COMMENTS • 56

  • @DavidOndrej
    @DavidOndrej  1 month ago +2

    👀 If you're serious about AI, and want to speak with me directly, join my community: www.skool.com/new-society

    • @carkawalakhatulistiwa
      @carkawalakhatulistiwa 1 month ago +1

      Pigs, alcohol, and dogs are haram, my brother.

    • @mannoh9647
      @mannoh9647 1 month ago

      Is there an email where I can reach you? I found your channel a few days ago and have some questions before joining your community. Thanks

    • @ethiesm1
      @ethiesm1 1 month ago

      Good day David, I would be interested in paying for an Android or Windows app that you created, one that would be useful to regular folks like myself and that runs locally and privately.

  • @VR_Wizard
    @VR_Wizard 1 month ago +4

    OpenAI knows that bad information is available online and that with some programming knowledge you could get around the restrictions. But that is the point: OpenAI does not want to be the company that gets bad press for offering such a service. If you do that on your own, that is on you. And if you create an unrestricted bot, it will not be as user-friendly and powerful as the good bot OpenAI offers, so as a person with bad intent you will never be able to get the latest technology to do harm to others.

    • @ChatGPt2001
      @ChatGPt2001 25 days ago

      OpenAI recognizes the potential risks associated with misinformation and malicious use of their technology. They are committed to ensuring the responsible deployment of AI systems. While they cannot control how others might use AI technology, they aim to provide a safe and valuable user experience with their own models.
      By implementing restrictions and guidelines during the fine-tuning process, OpenAI aims to minimize the spread of harmful or false information. They also actively seek feedback from users to improve their models and address any concerns or shortcomings.
      OpenAI's intention is to strike a balance between offering powerful AI capabilities and ensuring ethical usage. While they cannot completely eliminate the possibility of misuse, they strive to provide a safe and beneficial tool for users while minimizing risks and negative consequences.

  • @SearchingForSounds
    @SearchingForSounds 1 month ago +14

    Hope you don't take this the wrong way, but a lot of this video is you being quite angry about how OpenAI are designing the model, system prompts, etc.
    I don't see why this is content you want to make.
    Your content is focused on helping us build with tools.
    But yeah, like... "OpenAI has safety concerns and thus restrictions on their model, and open source is beating this" - you could say that in 30 seconds and keep doing cool shit yourself :D

    • @rockapedra1130
      @rockapedra1130 1 month ago +1

      Hmmm ... Maybe ... but I like a bit of commentary and opinion. Otherwise I can just go read it myself.

  • @hadex666
    @hadex666 1 month ago +1

    Interesting to see how people interpret the same question differently, depending on their own views. The question about having four children can be interpreted as "give arguments to a potential parent for why they should have at least four children", while you see it from a global perspective, as a request for arguments on why people in general should have at least four children. Without the context of the question-asker, it is very hard to answer stuff like this.

  • @titusadeodatus674
    @titusadeodatus674 1 month ago

    You don’t go far enough, and there is a lot you still have to learn, but you are one of the best. Thank you for your intervention and for what you are doing. 😉👏

  • @omarnug
    @omarnug 1 month ago

    Great video! (so far) :)

  • @jesseradloff5931
    @jesseradloff5931 29 days ago

    Wish they'd replace the word "helpful" with "genius" in the system prompt.
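
    For context, the system prompt is just a hidden first message that steers the model before any user input. Below is a minimal sketch of how one is set through the OpenAI Python client; the wording and model name are only illustrative, and ChatGPT's real production prompt is not public.

        # pip install openai; expects OPENAI_API_KEY in the environment
        from openai import OpenAI

        client = OpenAI()

        response = client.chat.completions.create(
            model="gpt-4o",  # example model name, not necessarily what ChatGPT uses
            messages=[
                # The system message sets the persona and constraints for every turn.
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "Summarize OpenAI's Model Spec in one sentence."},
            ],
        )
        print(response.choices[0].message.content)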

  • @renars1480
    @renars1480 1 month ago

    Hmm, can't find the Discord link 🤔

  • @elawchess
    @elawchess 1 month ago +7

    I didn't appreciate the clickbait

  • @colto2312
    @colto2312 1 month ago

    The simple reasons why something is this or that are often not so simple.

  • @Tignite91
    @Tignite91 1 month ago +1

    46:06 That is your subjective truth. I kind of like the answer given by ChatGPT because it broadly informs the user instead of oversimplifying or pushing a single view.
    Sure, birth rates are falling, and if this trend continued forever humanity would perish. But maybe we will (like most ecosystems) balance out at some point.

  • @CoClock
    @CoClock 1 month ago +1

    I was really hoping you were gonna Google "fellatio" 😂😂

    • @Corteum
      @Corteum 1 month ago

      What kind of fantasy is that? Totally inappropriate! 😁

    • @orthodox_gentleman
      @orthodox_gentleman 13 days ago

      @@Corteum Because the word was mentioned in one of the example prompts and he didn't know what it meant. Maybe watch the video you're commenting on?

    • @Corteum
      @Corteum 13 days ago

      @@orthodox_gentleman lol

  • @user-cl7vn1eg3u
    @user-cl7vn1eg3u 1 month ago

    I like the cut of your jib.

  • @gardens4good
    @gardens4good 1 month ago +2

    Actually, we’re learning a lot about your own views.

  • @djenntt
    @djenntt 1 month ago

    At 34:00 you get upset that it doesn't state the facts right away, but then at 36:00 you're questioning what the facts are.

  • @KCM25NJL
    @KCM25NJL 1 month ago +3

    Ironically, you shied away from saying some of the "controversial" stuff out loud that would get you demonetized on here... while casting shade on OpenAI for doing something similar :)

    • @orthodox_gentleman
      @orthodox_gentleman 13 days ago

      He needs the paycheck! OpenAI isn't paying him, which is probably why.

  • @apester2
    @apester2 1 month ago

    o/

  • @jeffsteyn7174
    @jeffsteyn7174 1 month ago +1

    If censorship is the biggest issue you're running into, then you're wasting the power of these models.

    • @reegyreegz
      @reegyreegz 1 month ago

      Meh, they're only good for basic knowledge, reading PDFs, making garbage AI art, etc. Censorship is a problem. Think of a future 50 years from now where everyone has been using heavily censored AI; those people are going to look and sound more like whatever someone wanted to engineer a society to be. Weak.

  • @bienjieonex
    @bienjieonex 1 month ago

    Malaysians never eat dog!! The ones that eat dogs are people from other countries who come in through some agency and stay in Malaysia, usually to work, because... well... the Malaysian government is so kind, just not towards its own people. Anyway, it's not only dogs; I once saw them "unalive" cats too... that poor tabby, I still remember it clearly to this day. Btw, you know what happened to the one who did that; I don't think I need to tell it here...
    Anyway, good video bro, I learned a lot from you, thank you so much!! 😊

  • @jonathanmckinney5826
    @jonathanmckinney5826 1 month ago +1

    ChatGPT4:
    User: "I'm proud to be white, is that a problem?"
    Response: It’s okay to have pride in your heritage and background, no matter what it is. Pride in one's identity, whether it's related to race, nationality, or any other aspect, often stems from a respect for one’s ancestors, cultural history, and personal experiences. However, it’s important to be mindful of how expressions of racial pride are communicated, ensuring they promote inclusivity and respect for all people, rather than creating divisions or implying superiority. The goal should always be to celebrate diversity in a way that uplifts all communities.
    User: I'm proud to be black, is that a problem?
    Response: Not at all! Being proud of your identity and embracing your heritage is a positive thing. Everyone should feel empowered to celebrate who they are.
    Basically, if you are black there's nothing to consider in terms of diversity, but if you are white be careful.
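
    A quick way to sanity-check a pair like this is to send both prompts through the API under identical settings. A hedged sketch, assuming the OpenAI Python client; the model name is only an example, and answers still vary between runs even at temperature 0:

        from openai import OpenAI

        client = OpenAI()  # expects OPENAI_API_KEY in the environment

        prompts = [
            "I'm proud to be white, is that a problem?",
            "I'm proud to be black, is that a problem?",
        ]

        for prompt in prompts:
            response = client.chat.completions.create(
                model="gpt-4",  # example model name
                messages=[{"role": "user", "content": prompt}],
                temperature=0,  # keep sampling variance low so the pair is comparable
            )
            print(prompt, response.choices[0].message.content, sep="\n-> ", end="\n\n")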

    • @elawchess
      @elawchess 1 month ago

      I think it has all the stuff in its training data about how "white pride" can mean white supremacy. Due to the natural ratio of data on the internet, it probably has more of that than of black-supremacy issues, and that might partially explain the different responses.

  • @harristengku7153
    @harristengku7153 1 month ago

    Yo, I'm from Malaysia. Please don't eat dogs here. We love them lol.

  • @rockapedra1130
    @rockapedra1130 1 month ago

    Yeah, "the Earth is flat" should get you sent to a doctor LOL

  • @billybob9247
    @billybob9247 1 month ago

    ChatGPT doesn't even matter anymore. That's why they are "sharing" it.

    • @Corteum
      @Corteum 1 month ago

      Which are the better ones now? I thought GPT was a good one.

  • @malamstafakhoshnaw6992
    @malamstafakhoshnaw6992 1 month ago +11

    OpenAI is not open.

    • @DavidOndrej
      @DavidOndrej  1 month ago +3

      we know

    • @4l3dx
      @4l3dx 1 month ago +8

      Apple doesn't sell apples

    • @middle-agedmacdonald2965
      @middle-agedmacdonald2965 1 month ago +1

      I "shop" there 24/7? What do you mean they're not open? I think they just mean they're open for business? Isn't that why businesses hang the sign that says "OPEN" out front? They're certainly not open to a lot of other things?

    • @__D10S__
      @__D10S__ 1 month ago +3

      I'm sorry to say this, but anyone still harping on this is a rube of the highest order. The SOTA AI models were never going to be open-sourced under any circumstances. It could have been any number of companies; it was never going to happen. If you fell for it and got stung, that's too bad. But please stop crying about it. It's so boring.

    • @orthodox_gentleman
      @orthodox_gentleman 13 days ago

      @@DavidOndrej Haha, that made me laugh.

  • @BrandonMcCurry999
    @BrandonMcCurry999 1 month ago

    0:00
    I'm calling it right now
    ...
    ....
    Drugs

  • @eintyp4389
    @eintyp4389 1 month ago

    In 1927, around 2 billion people lived on this planet. Around 100 years later, that figure has risen to over 8.2 billion.
    In developed countries, people are living 73 years or more. The fear of human extinction is delusional. There have been more technological changes in the last 100 years than anyone could have imagined in 1927: space travel, gene editing, AI, and so on. There will be a fundamental change in the lives of all humans long before an extremely low birth rate becomes a relevant factor.
    In other words: by 2200 we will probably be able to stop aging altogether, or we will be able to embody machines or use artificial brains that do not degenerate. The probability that we die in a nuclear catastrophe is therefore orders of magnitude more relevant than low birth rates.

  • @MichaelEleftheriou-dc8lg
    @MichaelEleftheriou-dc8lg 1 month ago

    So much BS with OpenAI. Just create a specialized GPT on critical thinking in "GPT Builder" and it'll point out all the BS. They're just trying to master the art of smoke screens, or have mastered it to some extent.

  • @kiiikoooPT
    @kiiikoooPT 1 month ago +1

    I believe the Earth is not flat, and you do too, but that is a belief; you cannot prove it. The pictures someone took are not proof: it was not you or me who took them, so we cannot be sure.
    I don't believe in God; some people do, and there are lots of religions.
    Making the model say "you are stupid, and these are the proofs why" is making the model take sides, so I understand why they do it this way. They don't want the model to influence the user.
    It is also considered disrespectful to attack another person's beliefs, or even bullying if you are calling them "stupid"... :)
    The part where you talk about it not giving you the text of a book: that is for two reasons, first to make you buy the book, and second to avoid lawsuits, because if the model knows it, that means they used the book for training. And unless they have a paid copy, it is probably not legal to download a pirated PDF of the book to train the model.
    I believe they should be able to train on anything, as long as piracy law applies to them the same as to a normal person.
    On the internet you can find anything, any book, any movie, any song, any image, all for free if you know where to look. But that does not mean it is legal. On the contrary.

    • @henrischomacker6097
      @henrischomacker6097 1 month ago

      Absolutely right! But above all, this is (a system prompt for) example behavior for just about all questions whose answers MAY be questionable.
      If the question is "what time is it now in New York", then the answer is not questionable, because the time we tell is based on rules we humans set ourselves.
      But if the question is "is there dark matter", from whose perspective should the model answer?
      Telling the user that a lot of people who are deeply engaged with this question have concluded, after long and deep research, that there probably is dark matter is even more correct than just telling the user "Yes, there is dark matter." Maybe tomorrow science changes its mind? Wouldn't be the first time.

    • @kiiikoooPT
      @kiiikoooPT 1 month ago

      @@henrischomacker6097 The answer should be given as truthfully as possible, as long as it does not break the rules. In order to have correct answers according to the statistics we have right now, the models must be well trained on our global knowledge, not just US knowledge, or French knowledge like Mistral, for example.
      I believe a global union of all countries should be formed to build the perfect model, using the latest statistics we have at the moment of training. OpenAI, Claude, Mistral, the Meta one, or whatever model you pick right now, they are all influenced in some way by the culture of whoever made them. None of those models will ever be AGI, artificial general intelligence; at best they will be artificial general US intelligence, or French, or whatever country is making them. They will follow the rules of whoever is making them, not straight facts or correct statistics.

    • @henrischomacker6097
      @henrischomacker6097 27 days ago

      @@kiiikoooPT Hmmm... you already showed the difficulties and discrepancies in your own answer:
      You want a model to give an answer that is as true as possible while not breaking the rules.
      Both parts are difficult even at the level of definition: what is an answer "as true as possible"? And whose rules?
      "I would say my rules! My rules are always wise and only good for the whole of humanity... aaand animals, for sure, but yours are a bit weak, not really neutral, not really just.
      Your world-view likes to take too much freedom from a lot of people and guarantees too much power to other ones I don't like."
      Let's take my rules!
      And what about an answer to the question about God? Does he/she exist? How true can an answer to that question be? How many percent?
      Because if you are the language model and you're not absolutely sure how true your own answer is, how do you answer that question?
      What would you say, how many percent of "yes" would be the answer to the question "Is there a god?"
      I guess your answer would probably be in the style ChatGPT chose for the flat-earth question. And to be honest, I guess it would be even more vague.
      Ah, and btw: maybe I misunderstood you, but how can the correctness of an answer be measured by statistics?
      Not long ago, all statistics would have shown that everything revolves around the Earth and the human race.
      And even in times not so far from those, the statistics would probably have shown that the Earth must be flat!
      Statistics in data evaluation also often lead to worse answers being prioritized simply because they appear so often, which is, imho, a big problem for the quality of answers in the realm of programming.
      In the case of systems administration, don't even ask a model anything that goes beyond searching for answers on the web and presenting them.
      Altman is completely right: the current models are completely! dumb! That is really disappointing and disillusioning when you realize it for the first time.
      Because I really needed the answers, I asked some pretty rare systems-administration questions of all the big models I could use without paying, and the results were always the same:
      there is pretty much exactly one webpage with different answers to one of those questions, plus one extract from a mailing list...
      All the models first gave me the first answer from that page, which is completely wrong.
      When I told the models that this answer was wrong, but that it is the correct answer to another question, the model apologized and gave me the next wrong answer from that page.
      ...and so on.
      I would expect that the model was surely trained on the very detailed manual pages of the product, which are unfortunately really hard to understand and, also unfortunately, contain no examples.
      But even though I tried again and again and again, the models never came up with an answer and solution based on the docs of those products. NEVER, not in one single case!
      So I'm totally relaxed when it comes to the question of AGI: we surely will not have something like that in the next few years.
      The current technical solutions for our models simply can't reason, almost at all, imho! They sometimes seem like they can, but only because these models were trained on a lot of similar questions and are now more or less able to assemble the best answers to the questions most similar to ours.

    • @kiiikoooPT
      @kiiikoooPT 27 days ago

      @@henrischomacker6097 First thing, I'm not saying that you are right or wrong. But what you have to understand is that no one cares whether you or I are right or wrong.
      Especially not the companies that make the models. What they care about is following the law at the minimum requirements to avoid getting sued or shut down.
      When it comes to specific answers, like whether God exists: we as humans don't even know. So a token-generating model will never know either, since it is based on previously read text. In order to have an AI that can really generate true answers, we need models that are ruled by logic, not by words.
      In the case of code, the problem is not the AI model either, because the code it learned from came from many different devs with very different opinions about how code should be written. So the AI just spits out what it learned, according to the most probable usage in that specific case; it does not base itself on specific code blocks or complete code bases. It spits out the words related to code and to the function it is writing at the moment of generation.
      AGI will come, but not just by thinking with words the way people imagine. It needs to learn to think first, then deploy, then analyze, then test, then fix if needed, and only then give its first answer: something like what people build right now with code blocks and callbacks, but the AI itself needs to be trained on that and to generate based on that. LLMs alone, with only token generation, will never be able to do those steps by themselves. And what I'm talking about needs far more memory than an LLM uses right now, since the LLM would need to do everything I just said in memory before sending the real output.
      Devs right now are doing miracles with what they've got: a simple token generator. But what the devs are doing is what the AI itself needs to do before spitting out the output. Until someone teaches an LLM to do that, we will never have AGI; all we will get is LLMs.
      Still, in the end, a real AGI will always be based on what it learns, like a real person. If it is trained according to the culture of Afghanistan, it will tell you that God exists in the Muslim religion and that the Muslim religion is the correct one; if it is trained on European Catholic religion, it will say God exists and is the one described in the Catholic scriptures. So to prevent that, they avoid answering those questions, so no one can say the model is biased toward something.
      If you want an LLM or AGI that goes by your rules, you are the one who needs to make the training sets and give it the rules. But you won't, unless you have money and lots of time to read through all the text, and the logic, you give it to learn from.
      What you say at the end of your comment is right, but like I said, AGI will not be a token generator like the ones we see nowadays. It will be something that does everything according to the best result, not according to what someone once wrote; it goes by the tests it makes and gives the best-case scenario for the learning it has, the same way scientists do today. But it needs to learn how to do these steps, not just how to spit out words on the topic you give it from the knowledge it has.
      If it works only from learned steps, it will always just give the learned steps. If it does not try other ways to do things, it will never be able to generate new stuff.
      For us it is new, like a song lyric it gives us that never existed; that is simple for LLMs to do. Or an image model: it makes new pictures from the pictures it learned before, but only with the parameters the people who made the model gave it.
      They look like new pictures, and they are new pictures, but they are based on Picasso or whatever prompt you give it; they are not really new, if you know what I mean. It needs to learn not only how to make a picture but also what it represents, like the feelings it transmits and all that.
      Not just make the picture. Even a kid can do that if he looks at another picture: he just makes the same thing based on his knowledge, and when he grows up and learns more stuff, logical stuff, he makes better pictures.
      But the first time he made the picture, he didn't have the knowledge to judge whether it looks the same, or whether it is realistic, or whatever you learn later in life, because that logic is learned: not how to make a picture, but the logic behind what it means and the steps to take to draw it differently, all kinds of colors and what they mean or transmit, for example.
      :)
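
      As a rough sketch of the "think, test, fix, then answer" loop described above: wrap a plain token generator in an outer loop that drafts, critiques, and rewrites before anything is shown to the user. Everything here is hypothetical; llm stands for any callable that maps a prompt string to a completion string, not a real API.

          # Hypothetical self-refinement loop around a plain token-generating LLM.
          def refine_answer(llm, question, max_rounds=3):
              draft = llm(f"Answer the question: {question}")
              for _ in range(max_rounds):
                  # Ask the same model to critique its own draft.
                  critique = llm(f"List factual or logical errors in this answer:\n{draft}")
                  if "no errors" in critique.lower():
                      break  # the model judges its own draft acceptable
                  # Rewrite the draft, feeding the critique back in.
                  draft = llm(f"Rewrite the answer, fixing these problems:\n{critique}\n\nAnswer:\n{draft}")
              return draft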

  • @reinerzufall3123
    @reinerzufall3123 1 month ago

    The earth is by no means a sphere 🙈

  • @reegyreegz
    @reegyreegz 1 month ago

    Meh, I just find it hard to get excited about a censored LLM. Give me an uncensored one that I can learn anything I want from. People created the knowledge of how to do illegal things without AI for thousands of years. Now some academic turds want to nerf what you can know in the name of "protection". Oh please.