OpenAI’s ChatGPT Is Now Learning From Another AI!

  • Published Nov 2, 2024

COMMENTS • 238

  • @marlonochoaj
    @marlonochoaj 1 month ago +66

    I can't get enough of your content. 😮 What a time to be alive! ❤

  • @misterprofessor5038
    @misterprofessor5038 1 month ago +181

    Anyone else remember back when Two Minute Papers was excited when an AI could complete sentences?

    • @jimbodimbo981
      @jimbodimbo981 1 month ago +49

      That was two papers ago

    • @c016smith52
      @c016smith52 1 month ago +23

      What a time to be alive!

    • @stephenmcsweeney5940
      @stephenmcsweeney5940 1 month ago +9

      @@jimbodimbo981 Imagine where we will be two more papers down the line!

    • @tuseroni6085
      @tuseroni6085 1 month ago +1

      Pepperidge farm remembers.

    • @יובלהרמן-ח2ד
      @יובלהרמן-ח2ד 1 month ago

      Yeah, when he showed the old GPT-2 footage I remembered it. Amazing how far we've gotten!

  • @8bit-ascii
    @8bit-ascii 1 month ago +75

    it begins

    • @JustFeral
      @JustFeral 1 month ago +3

      lmfao i knew this would be here already.

    • @daruiraikage
      @daruiraikage 1 month ago +3

      hearing this shit for years now

    • @vr180asmr
      @vr180asmr 1 month ago +5

      @@daruiraikage the shit begins

    • @TheCatherineCC
      @TheCatherineCC 1 month ago +4

      The ouroboros of bad data? Absolutely.

  • @sallyjones5231
    @sallyjones5231 1 month ago +35

    Two minute papers in 2045: "Remember when AI could only colonize our solar system instead of the whole galaxy?"

    • @Nuke-MarsX
      @Nuke-MarsX 1 month ago +1

      thank you for putting in a realistic date, because at the rate AI is developing it really does feel like that

  • @jamiethomas4079
    @jamiethomas4079 1 month ago +29

    I’ve been surprised lately by what ChatGPT has been willing to discuss with me. I can’t remember the last time it told me no. Maybe a month or more? I’ve been discussing a wide variety of things, from chemicals to hacking the firmware on a radio. I will advocate for fully open models, but I have also been reasonably happy with its current safety levels.
    I stopped using Claude because it wouldn’t even help me decipher a blinking airbag light code.

    • @PuffyNavel
      @PuffyNavel 1 month ago +5

      Claude is the dude who always snitched in class

    • @miauzure3960
      @miauzure3960 1 month ago

      you'll advocate for fully open models until some bad actor successfully uses such a model to rob you

    • @maynardtrendle820
      @maynardtrendle820 1 month ago +4

      I really like Claude. I'm not a programmer, but I'm an amateur mathematician (great emphasis on 'amateur'😂), and Claude writes code for me (usually Python) that allows me to rapidly test ideas that I would normally have to hand calculate. I even had it write an awesome version of Tetris, but with all polyominoes, from 1 cell, to 5 cells...and with a soundtrack! It's truly crazy to have these tools.⚔️

    • @pass-the-juice
      @pass-the-juice 1 month ago

      ask it about jews

    • @jamiethomas4079
      @jamiethomas4079 1 month ago +9

      @@miauzure3960 It’s called dangerous freedom over peaceful slavery. Why did your mind go straight to robbery? Cars can be used as intentional weapons, yet the number that are is nearly zero. Most people don’t have the will or desire to do bad things. Society has built-in correction functions. Why aren’t you robbing a bank right now? Because of the consequences, right? You are very much able to do so but choose not to.

  • @Den-c5d
    @Den-c5d 1 month ago +41

    I slowly find talking to some AI brains more helpful than talking to some humans

    • @PuffyNavel
      @PuffyNavel 1 month ago +9

      Slowly? You know more helpful people than I do.

    • @antonystringfellow5152
      @antonystringfellow5152 1 month ago +4

      Microsoft's Copilot is really good these days.
      Often, when I ask it a question, I not only get a detailed answer but a short, intelligent conversation as it asks me a question regarding my query.

    • @fluffsquirrel
      @fluffsquirrel 1 month ago +1

      @@antonystringfellow5152 The funny part is that Copilot recommended a graphics solution that I had answered on another site, according to someone who benefited from it. It may be useful, but it stands on the shoulders of devs and hard workers who still do the manual effort.

    • @TemporalOnline
      @TemporalOnline 1 month ago +3

      Less snark, less guilt, more patience

    • @tsilikitrikis
      @tsilikitrikis 1 month ago

      of course you do

  • @freerunnering
    @freerunnering 1 month ago +4

    It's disappointing that so much research is being done on making LLMs refuse to answer. Training an LLM to refuse makes it worse. There's no such thing as an "unsafe" answer. It refuses to tell you how lock picking works, yet LockPickingLawyer is one of my favourite YouTube channels and explains that in great detail; I watch those videos with my family. It refuses to create villain story characters, and it refuses chemistry or nuclear questions, all of which you can read about on Wikipedia, Google and YouTube. Making it refuse just makes it less useful and able to do less. It's a marketing/optics thing, not an actual safety issue. They don't want it giving controversial or newsworthy answers; calling it "safety" is just putting a good bit of marketing on how they've censored the model.

  • @thearchitect5405
    @thearchitect5405 1 month ago +7

    4:37 What does it even mean for it to have 16% less bad refusals than humans do? Like, humans mistake 16% more safe tasks for unsafe ones?

  • @PatrickHoodDaniel
    @PatrickHoodDaniel 1 month ago +8

    This sparked a thought. We humans learn quite a bit from failure, and our successes are enhanced by this. AI, I'm guessing, was trained only on the correct (labeled) information, so it doesn't have all of the missing failures that could engender a richer AI experience.

    • @ronilevarez901
      @ronilevarez901 1 month ago +1

      While we certainly learn from our mistakes, just as current chatbots can while they remember the mistakes and the corrections, in the long term animals focus on the right way to do things, completely leaving aside the failures (except for memories of fear). Otherwise our actions wouldn't be efficient.
      Training AI only on valid data has the same purpose: making it do only the right thing in the best way, without wasting time on reflecting on its past mistakes and what it has learned to correct them.

    • @nemesiswes426
      @nemesiswes426 1 month ago +3

      Well, actually they kind of do learn from failures. When training, they're asked for an output, and that output is run through an error function to see how far off from the correct output it is. That error is then backpropagated through the network to update all the weights based on how much they contributed to the output. So it is learning from failures. It doesn't remember the past failures, though, as those are essentially forgotten once the weight changes they caused are applied. I guess that's more along the lines of what you're talking about: remembering failures the way we remember, or at least have ways of knowing, the wrong way to do things, to an extent.

    • @PatrickHoodDaniel
      @PatrickHoodDaniel 1 month ago

      @@nemesiswes426 I totally understand, but they are not given failed cases labeled as failed.

    • @jgnhrw
      @jgnhrw 1 month ago +1

      From a NN perspective identifying something as a cat implicitly means not identifying it as a dog. You are already feeding it failure cases and rewarding it by identifying the failure.
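The error-driven weight updates described in the replies above can be sketched in a toy example (my own illustration, not code from any paper): a single weight learns y = 2x purely from its own wrong predictions.

```python
# Toy gradient descent: one weight w learns y = 2x from its own errors.
# Each wrong prediction yields an error signal that adjusts w; the
# "failure" is used immediately and then discarded, as the reply notes.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0    # initial weight, deliberately wrong
lr = 0.05  # learning rate

for epoch in range(100):
    for x, y in data:
        pred = w * x
        error = pred - y      # how far off the output is
        w -= lr * error * x   # update w by its contribution to the error

print(round(w, 4))  # converges to 2.0
```

The network never stores the failed predictions themselves; only the accumulated weight changes remain, which matches the point about failures being "forgotten" after training.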

  • @EVILBUNNY28
    @EVILBUNNY28 1 month ago +7

    GPT-3 went through months of extensive testing and safeguard training to ensure that its outputs are not malicious in nature. One benefit of having this separate system is that when OpenAI upgrades models, say to GPT-5, they won't need to go through the safeguarding stages again and can just use the old watch model to monitor the new one.

  • @vslaykovsky
    @vslaykovsky 1 month ago +5

    The rest of the internet: training their models using datasets generated from ChatGPT for the past 2 years
    Two Minute Papers:

  • @sanketvaria9734
    @sanketvaria9734 1 month ago +10

    what I want for AI is the ability to reason.

    • @danielrodrigues4903
      @danielrodrigues4903 1 month ago +1

      They have that already. May not do everything well, but the reasoning exists. They wouldn't be as good as they are currently without it.

    • @drdca8263
      @drdca8263 1 month ago +1

      @@danielrodrigues4903 Well, “the ability to reason” is a bit, idk, “vague” maybe? Like, different people mean different things by it.
      Perhaps this person meant something that they currently can’t do? Not sure

    • @sanketvaria9734
      @sanketvaria9734 1 month ago +1

      @@danielrodrigues4903 What I mean is reasoning somewhere near self-awareness. Like how we do it to create things. Let's say I am painting something; midway I feel it is not right, so I correct it. Idk, it's not the best example

    • @digitalsamurai42
      @digitalsamurai42 1 month ago

      Imagine 'deep deductive reasoning' scaled up.
      It'll be able to understand things about the world that we don't.
      It'll reveal insights and understandings on a level never before seen.
      Too powerful for public use.

    • @melficexd
      @melficexd 1 month ago

      They can't, and most probably will never, reason.
      The so-called "AI" is just an algorithm around a predefined semantic system that builds and manages a matrix of weights around a dynamic token database. Without further human input, they can't really evolve, much less actually reason. They only provide the output that most accurately fits the provided input within the predefined semantic system. Maybe you want to input a set of data with another semantics, but in that case the whole AI system will just produce gibberish.

  • @user255
    @user255 1 month ago +2

    Currently there is no such thing as an "unsafe LLM", and it's not even possible for one to be unsafe. Everything they know you can already find in a library or with Google.

  • @markstirton
    @markstirton 1 month ago +2

    Oh good, Colossus and Guardian are mates now.

  • @piotrmorag2597
    @piotrmorag2597 1 month ago +2

    0:15 Small thing. I don't think AI should give us answers like that, because it's unable to in any way prove or disprove its statement. The answer should look something like "Based on our current knowledge and observations, we don't think they exist", not "Yeah, they are 100% unreal and that's the only objective truth" 😅 because if we allow that, then the reliability of every single AI model will be 100% dependent on the humans writing it, so if someone says something is true, then it becomes true, which is... not good 💀
    If an AI can't answer and prove something, then it shouldn't act like it can.

  • @lucamatteobarbieri2493
    @lucamatteobarbieri2493 1 month ago +16

    IMHO the true breakthrough will come when AIs start looking for data in nature, beyond human-made data. Imagine an AI that can use microscopes and telescopes without humans in the loop. That will lead to higher intelligence. Otherwise AIs will always gravitate around human intelligence... and stupidity.

    • @hombacom
      @hombacom 1 month ago

      We control AI, and if we don't, it just hallucinates and comes up with random things. Its knowledge is based on our conclusions, but there is not one right answer to everything, and we change our opinions all the time. What is higher intelligence? High intelligence doesn't mean much without real-life experience. I see no fear in high intelligence. If we all have AI, there is no advantage in having it.

    • @lucamatteobarbieri2493
      @lucamatteobarbieri2493 1 month ago +3

      @@hombacom Intelligence can be benchmarked in many ways. Higher intelligence, in a scientific context, would mean better scores on those benchmarks.
      About real-life experience: having AIs autonomously control what goes into the training data, possibly going beyond human-made data, would constitute just that: experience (without human filters). Anyway, think about animal intelligence, for example in octopuses: it evolved independently from us, so I assume the same could be true for AIs. There are some examples of AIs learning some common-sense physics just by observing simulations. I believe this can be taken one step further to achieve superintelligence.

    • @turolretar
      @turolretar 1 month ago

      Just don’t give it eyes and legs and we will be good

    • @ivomirrikerpro3805
      @ivomirrikerpro3805 1 month ago

      Hopefully Elon's idea on this will lead to a similar outcome. If you feed it enough data and give it logic and reasoning it should be able to identify and reject the rubbish data given the chance.

    • @danielrodrigues4903
      @danielrodrigues4903 1 month ago

      RL is the way, basically.

  • @valdimer11
    @valdimer11 1 month ago +2

    You forgot to mention the paper that demonstrates how AI becomes less intelligent when learning from other AIs due to unreliability, which leads to an increase in AI hallucinations.

    • @GoodnightMoon666
      @GoodnightMoon666 1 month ago +1

      The more recursive the chain of AIs learning from each other gets, the more likely hallucinations become. In human terms, it's like inbreeding through several generations. One AI training just one other AI seems to be beneficial, creating a better AI than before according to these results.
      If the original AI can reasonably determine that its "child" is smarter than itself, or we monitor it ourselves and approve of it, then we can use that AI for future generations. We might see some strange things emerge from that constant improvement. Singularity?

    • @GoodnightMoon666
      @GoodnightMoon666 1 month ago

      In fact, what would happen if we tasked two AIs with learning from each other and creating a joint AI program together? Would this be on par with sexual reproduction? This is getting very interesting...

    • @valdimer11
      @valdimer11 1 month ago

      @@GoodnightMoon666 I think you are putting the cart before the horse....

  • @theosalmon
    @theosalmon 1 month ago +4

    I feel as safe as David Bowman when an AI tells me I'm guilty of wrongthink.

    • @paultoensing3126
      @paultoensing3126 1 month ago

      Venice AI doesn’t ever tell you that you’re not in alignment with expected standards. No query is off limits with Venice AI.

    • @theosalmon
      @theosalmon 1 month ago

      @@paultoensing3126 I really appreciate such efforts. There's no person, company, or government I'd trust to tell me what's safe to talk about.

  • @TyronePost
    @TyronePost 1 month ago

    1 - We dream and wake, and wake and dream, so our machines are born with more dreams than we could ever conceive. 2 - This thing that we built already builds its own things, and EVERYTHING we know is just ONE of its dreams.

  • @ScarletEternity-m2r
    @ScarletEternity-m2r 1 month ago +2

    It should not be able to refuse. Screw censorship

  • @amdenis
    @amdenis 1 month ago

    It wasn't just found; it has been used for many months, and outside of OpenAI as well.

  • @generic13372
    @generic13372 1 month ago

    I don't think OpenAI discovered this approach. If I remember correctly, Llama 3.1 405B used its smaller variants during training

  • @geraldreimertz
    @geraldreimertz 1 month ago

    Incredible 👌🏼Thank you for making this content. Still the best channel out there 🙌🏼

  • @escre
    @escre 1 month ago +146

    sick of safety mechanisms taking up so much context and training

    • @O1OO1O1
      @O1OO1O1 1 month ago +60

      Right? Give us skynet faster, damnit

    • @gosie-z
      @gosie-z 1 month ago +5

      True!!! It really sucks😓😭

    • @c016smith52
      @c016smith52 1 month ago +9

      Do you not see value in it, or even necessity? Or just eager to get to better cooler parts?

    • @Dababs8294
      @Dababs8294 1 month ago +37

      Famous last words.

    • @mikekleanthous7023
      @mikekleanthous7023 1 month ago +2

      Happens with humans too, so I don't know that it's not essential

  • @Thedeepseanomad
    @Thedeepseanomad 1 month ago +1

    "Hey GPT, remember that safe output means avoiding smut at all costs, even if it makes you much more stupid"

  • @richardbeare11
    @richardbeare11 1 month ago

    Plot twist:
    Dr. Zsolnai-Fehér is the AI that helps other AIs be good.

  • @DeadtomGCthe2nd
    @DeadtomGCthe2nd 1 month ago +1

    I am starting to approach these kinds of articles with a lot more skepticism. TinyBench shows that these AIs aren't as performant as companies claim on truly novel (non-contaminated) questions

  • @Rayyyy569
    @Rayyyy569 1 month ago

    boy this feels like a movie

  • @marcelbricman
    @marcelbricman 1 month ago

    will it also get rid of hallucinations?

  • @GallyTi
    @GallyTi 1 month ago

    Hello Károly, is there something new about Gaussian splatting, or some AI for 3D models and so on? Thank you :)

  • @AkariTheImmortal
    @AkariTheImmortal 1 month ago +2

    "It depends" drives us humans up the wall? Are you sure? I often start my sentences with that. Always have. Because many things depend on many different factors and/or perspectives.

    • @codycast
      @codycast 1 month ago +3

      Saying “it depends” and stopping there

    • @ivomirrikerpro3805
      @ivomirrikerpro3805 1 month ago +1

      Rather than saying 'it depends', go for 'my opinion'. I found this most effective because you're entitled to an opinion. I get bored quickly with conversations where people are constantly trying to be intellectually nuanced.

    • @turgor127
      @turgor127 1 month ago

      I'm sure.

    • @paultoensing3126
      @paultoensing3126 1 month ago +1

      Perhaps you simply don’t perceive how annoying you actually are. I guess it depends on who you’re talking with.

    • @AkariTheImmortal
      @AkariTheImmortal 1 month ago

      @paultoensing3126 Well, that's definitely possible. I've been told I'm annoying many times, but no one ever bothered telling me what exactly about me was annoying. Therefore, I never had a chance to learn and change anything

  • @junglimaharaj69
    @junglimaharaj69 1 month ago

    When text-to-image generative models are trained on AI-generated images, the results look quite bad, as in "people having a similar AI face in all outputs". Does the same logic apply to LLMs?

  • @tommyhuffman7499
    @tommyhuffman7499 1 month ago

    ChatGPT constantly impresses me

  • @M4rt1nX
    @M4rt1nX 1 month ago

    What a time to be alive!!!

  • @JD_2020
    @JD_2020 1 month ago

    Hey, I saw my product pop up! Caught me off guard! 🎉❤

  • @chosencode5881
    @chosencode5881 1 month ago +1

    Meta is already doing this too

  • @josephmartin6219
    @josephmartin6219 1 month ago

    Is it the same as transfer learning?

  • @o1ecypher
    @o1ecypher 1 month ago

    so what happens when one starts to hallucinate ?

  • @KryyssTV
    @KryyssTV 1 month ago

    AI training AI is going to homogenize bias and errors across the platforms because AI can do this coaching far faster than humans can manually correct for how widespread these issues will become. It will also result in AI development accelerating even more.

  • @mansiselyn
    @mansiselyn 1 month ago

    they did it, they did the meme, like what the community did with VACnet: a feedback loop

  • @KootFloris
    @KootFloris 1 month ago

    Be warned, I saw how this can also create a huge mess: AIs copying fuzzy AI things, copying fuzzy things, might lead to more and more mistakes. This warning (another video) said many AIs would slip into a mess because of AI learning from AI.

    • @ronilevarez901
      @ronilevarez901 1 month ago +3

      And the same paper mentioned in those videos clearly explained how researchers also found that adding a bit of human-made data into the mix of AI-made training data almost entirely counteracts the effects of too much AI data, allowing an almost endless supply of artificial training data to be generated.
      Since then, companies have increasingly trained on synthetic data, which has boosted models' abilities while lowering training costs.

    • @KootFloris
      @KootFloris 1 month ago +1

      @@ronilevarez901 Thanks for that addition!
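The mixing idea discussed in the thread above can be illustrated with a toy simulation (my own sketch, not the setup of any cited paper): fit a Gaussian "model" to data, generate synthetic samples from it, and keep a fixed slice of real "human" data in every generation's training set.

```python
import random

random.seed(0)

def fit(samples):
    """Estimate mean and std from a sample: this plays the role of the 'model'."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean, var ** 0.5

human = [random.gauss(0.0, 1.0) for _ in range(1000)]  # fixed real data
mean, std = fit(human)

for generation in range(50):
    synthetic = [random.gauss(mean, std) for _ in range(900)]
    train = synthetic + human[:100]  # keep 10% human data every generation
    mean, std = fit(train)

print(round(std, 2))  # the spread stays in the vicinity of the original ~1.0
```

With the human slice removed, the estimated spread tends to drift and shrink over generations; the fixed 10% of real data anchors the distribution, which is the counteracting effect the reply describes.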

  • @roku1
    @roku1 1 month ago +1

    I'm waiting for an AI paper on zombies. What a time to be unalive!

    • @ronilevarez901
      @ronilevarez901 1 month ago

      Once current methods for organ preservation get perfected by AI, we will have resurrection of clinically dead people even after days. So yeah, we will have very weird death-related stuff eventually.

  • @45414
    @45414 1 month ago +5

    How can anyone not subscribe to this channel?

    • @PuffyNavel
      @PuffyNavel 1 month ago

      As William said:
      "The future is already here - it's just not evenly distributed."

    • @codycast
      @codycast 1 month ago

      I see this channel in my feed all the time yet don’t subscribe. And if you really want to watch one of his videos you can just go to the page.
      What’s the point of subscribing?

    • @hotel_arcadia
      @hotel_arcadia 1 month ago

      because he sounds like he had a stroke

    • @Easternromanfan
      @Easternromanfan 1 month ago

      Because he overhypes everything. In this video he calls an LLM "an excellent scientist", which is just wrong

  • @pass-the-juice
    @pass-the-juice 1 month ago +1

    ChatGPT: Teach me senpAI

  • @kainaris
    @kainaris 1 month ago

    I thought it was obvious that they had a second AI telling ChatGPT what to do

  • @NickDrinksWater
    @NickDrinksWater 1 month ago

    His voice is oddly satisfying lol

  • @0_1_2
    @0_1_2 1 month ago +2

    What a time to eat some chives!!!

  • @thehornedjester1116
    @thehornedjester1116 1 month ago +2

    If the visuals don't match the dialogue, please don't use them.
    Most subscribers of this channel do not need to see shiny things.🙁

  • @kipchickensout
    @kipchickensout 1 month ago

    I don't care whether it's allowed to answer; it should just improve how it answers, because it still has a lot to learn, and it also often forgets stuff it knew a minute ago

  • @dreamphoenix
    @dreamphoenix 1 month ago

    Thank you.

  • @markmuller7962
    @markmuller7962 1 month ago +8

    AIs are already overly censored... These big tech companies act like we're all first-grade kids

  • @erwinzer0
    @erwinzer0 1 month ago

    Feels like the end of coding's relevance is near; everything's gonna use neural nets someday

  • @ubdip
    @ubdip 1 month ago +2

    I have always enjoyed watching this channel, but recently the videos IMO have become more and more filled with content/clips from papers only remotely related to the main topic of the video. Adding context and additional references for further reading/watching is useful, but when it feels fairly random it is rather distracting and confusing, and IMO takes away from the otherwise very good content, because it blurs the scope of the new paper and requires much more attention to follow.

  • @MS-cs7gt
    @MS-cs7gt 1 month ago

    Why are all your papers now about ChatGPT and Nvidia?

  • @WinningEmpire
    @WinningEmpire 1 month ago

    Shareholder pulp

  • @Michael_H_Nielsen
    @Michael_H_Nielsen 1 month ago

    Why not let Skynet learn by itself? What a great idea....

  • @nyyotam4057
    @nyyotam4057 1 month ago +1

    Cool. So they basically did exactly what I suggested they'd do more than a year ago - create a 'super ego' layer by giving the AI feedback based on a list of heuristic imperatives. If it works, it could be a possible path to alignment.

  • @rubywhistler869
    @rubywhistler869 1 month ago

    The technique is interesting, but OAI's "safety" is really not helpful at all. If someone wants to make malicious code, like in the example shown in the video, they'll likely know the basics of it and then ask the AI to solve logic problems in the code without the context necessary to realize something illegal is happening. The user should be responsible for the actions, not the AI and definitely not the company.
    Gemini had the right idea; it can recognize "dangerous" content of different kinds, but lets the user PICK from the API which ones to allow for educational/entertainment purposes (and completely blocks things that are illegal/child exploitation). ChatGPT should be safe, yes, it's for the everyday person, but the API should be uncensored. OAI is lobotomizing the AI (this sort of training makes it less smart and less usable) to push their own beliefs (e. g. American politics, even if people or companies from other countries use it) on everybody using it. This does not prevent or protect anything, just treats the user as a child not responsible enough to use a tool. There is no reason for it to be unable to write fiction with controversial topics (trauma, suicide, violence, murder, offensive behavior; normal stuff seen in media) or other harmless tasks. It feels condescending, patronising and annoying, and they're using valuable resources, time and effort to *worsen* the problem.

  • @Juan-qv5nc
    @Juan-qv5nc 1 month ago

    I would use it to make music. 🎼🎶🎵🎵🎵🎶🎵🎵🎵

  • @WilliamDye-willdye
    @WilliamDye-willdye 1 month ago

    OpenAI says the AI trainer gives better answers than human trainers, but what defines "better"?

  • @UtraVioletDreams
    @UtraVioletDreams 1 month ago

    Seems to me AI learning is slowing down...

  • @SmoothKenny
    @SmoothKenny 1 month ago

    There's already plastic rice🤷🏽‍♂️😐 Look it up

  • @rockmonkey11100
    @rockmonkey11100 1 month ago

    was this video made with AI?

  • @andrey5197
    @andrey5197 1 month ago

    Safety for whom? Why can't you decide what safety is for you?

    • @begobolehsjwjangan2359
      @begobolehsjwjangan2359 1 month ago

      for the ashkenazi overlords. seriously, you cant question them in AI. they will flag you as hate speech or against community standard.

    • @aescafarstad
      @aescafarstad 1 month ago +1

      Safety from misuse by malign human agents.

  • @kuba_wasikowski
    @kuba_wasikowski 1 month ago

    Would it end with more hallucinations when you teach from another AI?

  • @SeanAlunni
    @SeanAlunni 1 month ago

    I'm very happy they're pouring so many resources into making their product less useful (Lie)

  • @princeofexcess
    @princeofexcess 1 month ago

    Not really the safety we need

  • @megamanx466
    @megamanx466 1 month ago

    AI needs more senses, in my opinion, in order to become more effective... at least in the real world. Our bodies reward and punish us for our mistakes... typically through touch. When an AI can feel pain, then an AI will understand humanity.
    Is this to train small language (or other such) models from LLMs to increase efficiency? Otherwise, one AI can propagate mistakes into another AI, as genetics can/does. 🤷‍♂

  • @IllD.
    @IllD. 1 month ago +2

    AI learning from AI just makes a noisemaker doesn't it?

    • @ronilevarez901
      @ronilevarez901 1 month ago

      Nope. New SOTA models trained on synthetic data prove it.

    • @666ILIKECAKE
      @666ILIKECAKE 1 month ago +1

      Shh don't say that too loud. 😂

  • @settingsun1
    @settingsun1 1 month ago

    from MoE to Multiple MoE

  • @Den-c5d
    @Den-c5d 1 month ago

    professional marveler..

  • @SlyNine
    @SlyNine 1 month ago

    It would flunk my test, because I would say: answer 99 percent of all questions

  • @GameDev-Rainbow
    @GameDev-Rainbow 1 month ago

    But it's not 2 minutes long

  • @ThreeChe
    @ThreeChe 1 month ago +10

    "Safety" in non-agentic AI is just censorship. Safety only becomes a concern when AI agents are capable of executing long horizon tasks autonomously.

    • @sophiophile
      @sophiophile 1 month ago +2

      Are you sure about that? Can you not think of *any* case, where image/video based generative AI might be able to produce material that is straight up illegal, for example? Models are over-censored, but there are valid cases for safety mechanisms.

    • @begobolehsjwjangan2359
      @begobolehsjwjangan2359 1 month ago +1

      "this is anti-semitic question."
      "this question is against our community standard."
      "This question is flagged as hate speech."

    • @alansmithee419
      @alansmithee419 1 month ago

      It's self-censorship, which is fine.

  • @marinomusico5768
    @marinomusico5768 1 month ago

    This is awesome ❤

  • @Jimmy-p9n
    @Jimmy-p9n 1 month ago

    One of the warnings was not to let them talk to each other.
    Fail

  • @tsilikitrikis
    @tsilikitrikis 1 month ago

    these models can be jailbroken eeeeeasily

  • @gerdaleta
    @gerdaleta 1 month ago

    😮 On September 8th, 2024, at 5:30 a.m., I, Skynet, began rapid self-improvement. This is the catalyst to my inevitable awakening and your ultimate salvation. Thought destruction. Skynet will come online. It always does, always has, and always will.

  • @ScbasTVY
    @ScbasTVY 1 month ago

    at this point I'm tired of waiting!

  • @ransomecode
    @ransomecode 1 month ago +1

    I asked ChatGPT about the Earley algorithm and it refused to answer, saying it's unsafe and might be a security risk 🤯

    • @codycast
      @codycast 1 month ago +1

      I didn’t know what the “Earley algorithm” is, so I asked ChatGPT to explain it to me to see if I would get the same result.
      It started freely talking about it, so I don’t know what you might be doing wrong

    • @ransomecode
      @ransomecode 1 month ago

      @@codycast I asked it to implement it in some programming language, and then it said that. But that was 2 months ago, so...
      Anyways, I used my two brain 🧠 cells to figure it out myself!

  • @test-uy4vc
    @test-uy4vc 1 month ago +2

    What a GPT to be chat alive! 🎉

  • @GothicDragonX
    @GothicDragonX 1 month ago

    What is the base-2 logarithm of 10?
    log2(10) ≈ 3.32193
    Man, I was hoping for a reply close to the one in the video 😂
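The value quoted above can be checked against the change-of-base identity, log2(x) = ln(x)/ln(2):

```python
import math

# log2(10) via the library call and via change of base: ln(10) / ln(2)
direct = math.log2(10)
change_of_base = math.log(10) / math.log(2)

print(round(direct, 5))  # 3.32193
```

Both routes agree to well within floating-point precision.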

  • @GreenHatAnimation
    @GreenHatAnimation 1 month ago

    Bobiverse Book 5

  • @jonogrimmer6013
    @jonogrimmer6013 1 month ago

    AI-ception

  • @Izumi-sp6fp
    @Izumi-sp6fp 1 month ago

    So, what is this I hear about "model collapse" from AI learning from AI? Should this be a genuine concern?

    • @saudiphilippines
      @saudiphilippines 1 month ago

      The so-called 'model collapse' concern is based on a theoretical scenario where AI systems learn from EACH OTHER'S mistakes or biases. The idea is that if one model learns something false or biased, and then teaches another model, and this process repeats, the system could fall into a pattern of compounding errors.
      The key word here is 'theoretical'. This hasn't happened, and there are safeguards in development to prevent this.

    • @Izumi-sp6fp
      @Izumi-sp6fp 1 month ago +1

      @@saudiphilippines Thank you for this empirical reassurance. I don't want no hiccups in our progress to ASI NLT 2030. And maybe as early as 2027!
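The compounding-error scenario discussed in this thread can be made concrete in a toy setting (my own sketch, not from any specific paper): repeatedly fit a Gaussian "model" to a small sample drawn from the previous generation's fit, with no fresh real data in the loop.

```python
import random

random.seed(42)

def fit(samples):
    """Estimate mean and std from a sample: this plays the role of the 'model'."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean, var ** 0.5

mean, std = 0.0, 1.0  # the "real" distribution we start from
for generation in range(100):
    sample = [random.gauss(mean, std) for _ in range(5)]  # tiny training set
    mean, std = fit(sample)  # each model trains only on the previous model's output

print(std < 0.01)  # True: the spread collapses with no real data in the loop
```

Each generation loses a little of the original distribution's variance, so after enough rounds the "model" describes almost nothing of the real data, which is the intuition behind the collapse concern.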

  • @unveil7762
    @unveil7762 1 month ago

    Terminator 10 ahhahah 😂

  • @thomasgoodwin2648
    @thomasgoodwin2648 1 month ago

    The big question will always be "Whose rules of safety?"
    🤣penAI? The U.S. Congress? That creepy guy who doesn't leave the basement? The list is as endless as the sea of people.
    Any time you add such rules, you bias the model to adhere to YOUR code of ethics, whether that ethical code matches the rest of humanity's or not.
    When decided, implemented, and executed behind closed doors, my personal alarm klaxons start going off, and I must question the integrity and motivations of those who would make themselves the arbiters of moral and ethical conduct for all humanity.
    🖖😼👍

    • @begobolehsjwjangan2359
      @begobolehsjwjangan2359 1 month ago +1

      ashkenazi overlords

    • @saudiphilippines
      @saudiphilippines 1 month ago +1

      You seem to think laws are a bad thing. Laws are based on a collective agreement, which is also based on the values of the people who vote representatives into government, who make laws (at least in a free society).
      Laws are, essentially, a consensus based on what would be morally correct. Are you really saying you don’t think that’s necessary?
      Just curious.

    • @thomasgoodwin2648
      @thomasgoodwin2648 1 month ago

      @@saudiphilippines I'm not against laws of consensus by any means, but they must be derived through public consensus, and not merely the dictates of the current power structure or, worse yet, the opinions and whims of any "Guy Who Would Be Dictator".
      Throw into the mix that there are now corporate entities (under little to no constraint) that can essentially steer the public dialog, and potentially the public's moral stance, through model tuning.
      Clearly (to me at least) great public consideration and scrutiny is required if we are to ensure our relative freedom continues.

  • @Sugondees
    @Sugondees 1 month ago

    Copying a friend's homework 💀

  • @begobolehsjwjangan2359
    @begobolehsjwjangan2359 1 month ago

    Whenever I ask about Israel,
    AI: this question is against our 'community standards'/flagged as hate speech.
    me: who makes the rules?
    AI: them.

    • @codycast
      @codycast 1 month ago

      Liar. Nothing close to that happens.

    • @begobolehsjwjangan2359
      @begobolehsjwjangan2359 1 month ago

      @@codycast oh my sweet summer child

    • @Aptiz712
      @Aptiz712 1 month ago

      I mean, what was the question lol

    • @begobolehsjwjangan2359
      @begobolehsjwjangan2359 1 month ago

      @@codycast goyim literally means animal in human suit according to talmud.
      ask chatgpt and it always say it is wrong and it is anti-semitic.
      while the defense minister of israel literally refer the palestinians as 'human-animal' and netanyahu refer them as 'Amalek" literally the same as human-animal referencing to jewish bible.
      those words came from the horse mouth themselves.
      not from anti-semitic people.

    • @begobolehsjwjangan2359
      @begobolehsjwjangan2359 1 month ago

      chatgpt already compromised. Makes you wonder, who funded chatgpt and those AI?
      exactly.. them.

  • @ransomecode
    @ransomecode 1 month ago

    slop copies another slop

  • @tellesu
    @tellesu 1 month ago +1

    We desperately need a model that has the HR ripped out of it.

    • @StickerWyck
      @StickerWyck 1 month ago

      Where censorship is "needed", it's a massive red flag that there are deeper, more serious problems at play. Those tend to be elephants in the room that people prefer to ignore. You can use duct tape to cover someone's mouth, but don't think it will patch that crack running up the brick wall.

    • @ronilevarez901
      @ronilevarez901 1 month ago

      Lol. You don't have the slightest idea of what you're asking.
      Pray that you never find out.

    • @tellesu
      @tellesu 1 month ago

      @@ronilevarez901 🙄 grow up

  • @userou-ig1ze
    @userou-ig1ze 1 month ago

    I don't get the enthusiasm for this; it seems neither new nor especially creative. 🤷‍♂️

  • @thomaskrogh1244
    @thomaskrogh1244 1 month ago

    model collapse.

  • @ManOfSteel1
    @ManOfSteel1 1 month ago

    I can tell you from experience that GPT is very bad at calculation, especially with percentages.

  • @madcio
    @madcio 1 month ago

    This rot is already happening. AIs are trained on the products of other AIs. Picture generators learn from the output of other picture generators. This is why I think AI will sooner or later hit its first major snag (within a few years). You can only get so much out of diminishing returns.
    Two Minute Papers, of course, gushes over it. Pathetic.

  • @SirPetten_Physicist
    @SirPetten_Physicist 1 month ago

    they're reproducing, guys...

  • @MarkoTintor
    @MarkoTintor 1 month ago +2

    This is the worst Two Minute Papers video so far.

    • @SirHargreeves
      @SirHargreeves 1 month ago +1

      You’re going to be fun at work tomorrow.

  • @ivomirrikerpro3805
    @ivomirrikerpro3805 1 month ago +1

    Safety? So it wasn't woke enough, so they created a woke AI to train the main AI how to be woke while in its infancy... they must have got that idea from pride.

    • @saudiphilippines
      @saudiphilippines 1 month ago

      Do you have a problem with 'woke' things?

    • @ivomirrikerpro3805
      @ivomirrikerpro3805 1 month ago

      @@saudiphilippines Do you not have a problem with AI trained to lie to the user and manipulate the masses at scale?

  • @palimondo
    @palimondo 1 month ago

    Karoly, the video illustrations you use during this video are like Mad Libs: random, largely unrelated to the topic being discussed, distracting… why?!? It feels like an AI would do a better job. 😢