Are We Back to Before? OpenAI 2.0, Inflection-2 and a Major AI Cancer Breakthrough

  • Published May 8, 2024
  • Hopefully the last in a trilogy of videos covering the saga at OpenAI. New revelations on the return of Sam Altman, internal investigations, Microsoft demands and employee revolts. Plus a major new model previewed from Inflection, Claude 2.1 and a major pancreatic cancer breakthrough with AI. More non-OpenAI news to come.
    WSJ Exclusive: www.wsj.com/tech/ai/altman-fi...
    Altman Tweet: / 1727207458324848883
    Emily Chang: / 1727228431396704557
    OpenAI Charter: openai.com/our-structure
    Geoffrey Irving Post: / 1726754270224023971
    Nadella Interview: • Microsoft Wants to Wor...
    Kevin Scott Outreach: kevin_scott/statu...
    NYT Exclusive: www.nytimes.com/2023/11/21/te...
    Toner Report: cset.georgetown.edu/publicati...
    Sam Altman Comments: • Sam Altman's World Tou...
    Open Letter from OpenAI: s.wsj.net/public/resources/do...
    Anthropic Merger: www.theinformation.com/articl...
    New Board: www.bloomberg.com/news/articl...
    Altman Blog: blog.samaltman.com/quora
    Nadella Statements: / 1726794158424424511
    satyanadella/stat...
    Larry Summers Interview: / 1644388988071886848
    Claude 2.1: www.anthropic.com/index/claud...
    / 1
    Inflection 2: www.forbes.com/sites/alexkonr...
    Cancer Detection: www.nature.com/articles/s4159...
    / aiexplained Non-Hype, Free Newsletter: signaltonoise.beehiiv.com/
  • Science & Technology

COMMENTS • 600

  • @toonv4023
    @toonv4023 5 months ago +384

    posted 24s ago and probably a new ceo already by now

  • @N22883
    @N22883 5 months ago +120

    I guess my biggest fear is that we’re placing Sam on too high of a pedestal, and he actually did do very unsafe things. I just hope the new board members won’t be all yes-men/women who won’t challenge safety concerns

    • @IdOnThAvEaUsE69
      @IdOnThAvEaUsE69 5 months ago +4

      I'd rather they be tho... The current system is kinda messed up, so if AI does take over... It'll only destroy these institutions, saving us time and money...
      Sam's Worldcoin was a good idea ngl. If AI can do all the necessary work like agriculture, transport, etc., we humans will have more time (and money from a UBI or UBR) to do what we want to.

    • @flightevolution8132
      @flightevolution8132 5 months ago

      @@IdOnThAvEaUsE69 You have no conception of the level of destruction AI could potentially bring us. Destroying those "institutions" is the least of universal life's concerns.

    • @haydnw869
      @haydnw869 5 months ago

      @@IdOnThAvEaUsE69 The problem with UBI is the government has complete control over your income and they can cut you off if they don’t like you

    • @dizparkash
      @dizparkash 5 months ago +10

      @@IdOnThAvEaUsE69 It’s never a good thing when you factor in who governs these systems and their intention with them. It’s never as altruistic as the marketing/brand/PR convey publicly

    • @IdOnThAvEaUsE69
      @IdOnThAvEaUsE69 5 months ago +2

      @@dizparkash Sam's already loaded with cash pretty sure. He earns more than he could ever spend in a day. What's the next step for a human after becoming a billionaire lol? He doesn't seem all that bad to me. Unless he's an anarchist, bro would only improve humanity, y'know.
      Unlike a certain CEO of Tesla and SpaceX over here...

  • @Strawberry_ZA
    @Strawberry_ZA 5 months ago +306

    I rely heavily on your informed and level headed reporting. Thanks!

    • @aiexplained-official
      @aiexplained-official 5 months ago +25

      Thanks Strawberry

    • @brooktewolde5775
      @brooktewolde5775 5 months ago +1

      I second this!

    • @jasonyocum36
      @jasonyocum36 5 months ago +4

      @@aiexplained-official Thought you just gave them a cute nickname cause I saw your comment before I saw their username

    • @guycomments
      @guycomments 5 months ago +7

      yup, this is the best AI channel by far, as far as I've found

    • @lespaceman
      @lespaceman 5 months ago

      @@brooktewolde5775 I third this

  • @gavinbarrett-hayes
    @gavinbarrett-hayes 5 months ago +14

    That last moment of the video was really bittersweet. Lost my father to that type of pancreatic cancer just about a year ago. Glad others won't lose their parents like I lost my dad.

  • @FreestyleTraceur
    @FreestyleTraceur 5 months ago +54

    This channel is easily one of the best channels/podcasts/whatever for staying current on AI industry news. Strikes the right balance of overviews and deep dives.

  • @sebastiana3115
    @sebastiana3115 5 months ago +18

    Note on the paper in Nature: the specificity and sensitivity might seem terrible, and this is because non-contrast CTs are not the preferred diagnostic modality for pancreatic cancer. In fact, non-contrast CTs of the abdomen are quite rarely performed in the Western world, at least in my hospital, as abdominal CTs benefit hugely from contrast (an injection that gives blood increased contrast). So ordering a non-contrast CT of the abdomen is only done in specific circumstances, making this use case somewhat limited. The fact that this is a limited scenario is probably why AI models perform better than radiologists, as it is not what they are usually the most practiced at. (A short worked example of these two metrics follows this thread.)
    However, pancreatic cancers are often found incidentally on CT scans ordered for other reasons, and of course this model improving the detection rate in this niche context is amazing, and no doubt only the beginning. This paper represents models creeping up the competence ladder, first in niche tasks, and soon perhaps in more common ones.

    • @aiexplained-official
      @aiexplained-official 5 months ago +1

      Amazing analysis, thanks sebastian

    • @lorenzoblz799
      @lorenzoblz799 5 months ago

      If I understood this correctly, in some comparisons, the radiologists (only) were given contrast CTs: "Notably, PANDA utilized with non-contrast CT shows non-inferiority to radiology reports (using contrast-enhanced CT) in the differentiation of common pancreatic lesion subtypes.". They also discuss why they focused on non contrast CT.
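
For readers less familiar with the two metrics named in the top comment of this thread, here is a minimal sketch of how sensitivity and specificity are computed from a confusion matrix. The counts are purely illustrative placeholders, not figures from the Nature paper.

```python
# Minimal illustration of the metrics discussed above; the counts below are
# made-up placeholders, not results from the PANDA paper.

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: the share of actual cancers the test flags."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: the share of healthy scans correctly cleared."""
    return tn / (tn + fp)

if __name__ == "__main__":
    tp, fn, tn, fp = 90, 10, 950, 50  # hypothetical screening counts
    print(f"sensitivity = {sensitivity(tp, fn):.1%}")  # 90.0%
    print(f"specificity = {specificity(tn, fp):.1%}")  # 95.0%
```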

  • @RazorbackPT
    @RazorbackPT 5 months ago +21

    So what we learned is that breaks actually exist, but if anyone tries pushing on them, everyone will get very mad and threaten to go accelerate somewhere else. Great.

    • @ooooneeee
      @ooooneeee 5 months ago +5

      Yeah, it's sobering 😕.

    • @PazLeBon
      @PazLeBon 5 months ago +4

      brakes

    • @juliashearer7842
      @juliashearer7842 5 months ago

      This is exactly what it seems like

  • @andybaldman
    @andybaldman 5 months ago +139

    Funny how so many tech companies start out altruistic. But that all goes out the window once the numbers get big enough.

    • @Bhodisatvas
      @Bhodisatvas 5 months ago

      Always the dollars

    • @FreestyleTraceur
      @FreestyleTraceur 5 months ago +35

      It's always just marketing/branding/PR. Everyone loves a feel-good story and everyone loves to feel like the hero. The worst part to me is the personality cults that follow.

    • @neon_Nomad
      @neon_Nomad 5 months ago +15

      Power.. power always corrupts

    • @KitaTaki-mk3gt
      @KitaTaki-mk3gt 5 months ago +14

      I think at some point they reach the point of no return. They can’t stop the enterprise because too many parties with different interests are involved (remember the employee revolt). From the video, quoting Altman: “what if there’s something in these systems that was very difficult to see or understand … and now it’s out”. I wonder, if they were to discover something like that, would they tell us and try to stop the unstoppable train? Or would they cover it up and hope for the best?

    • @levifig
      @levifig 5 months ago +5

      "Don't be evil" comes to mind… 🫣

  • @tarwin
    @tarwin 5 months ago +126

    For the cancer detection, I could see this being rolled out pretty quickly as a second opinion, because if it's that much better you will get people suing if it is not used.

    • @DaveEtchells
      @DaveEtchells 5 months ago +24

      Good point; can a doctor be sued for malpractice for not using it? (Actually I don’t think that can happen until it’s somehow blessed as the “standard of care” by some official entity. Still, it’s coming at some point.)

    • @maciejbala477
      @maciejbala477 5 months ago +2

      that's extremely interesting! certainly not an angle under which I considered AI developments yet

    • @ooooneeee
      @ooooneeee 5 months ago +1

      Nah, if it's not licensed for the use and hasn't gone through three phases of clinical studies you couldn't just sue for a license. False positives are harmful to the mental health of patients. It needs to have very few false positives and false negatives to be useful without being harmful.

    • @PazLeBon
      @PazLeBon 5 months ago

      @KohChanWai here we go with the spam bot scum

    • @raoultesla2292
      @raoultesla2292 5 months ago

      Good point. People will demand AGI decide how they are diagnosed. When the Barrister for wrongful death plaintiffs uses an AGI LLM to create the brief on medical treatment efficiency they will win.

  • @jordanledoux197
    @jordanledoux197 5 months ago +4

    The most charitable way of interpreting the board's actions that I can see is that they basically had a moment where they were like, "Sam Altman is a master manipulator narcissist, and he's so good at it that we can't even pull up specific examples, and that kind of person cannot be allowed to be the one to create the first AGI, and this is our very last chance to prevent that."
    There's obviously a lot of assumptions built into that interpretation, but like I said, I think that is the most charitable interpretation that fits all the statements and facts we know about.

    • @41-Haiku
      @41-Haiku 5 months ago

      I agree

    • @ooooneeee
      @ooooneeee 5 months ago

      Yeah, if they are right it's a terrifying thing for the future of AI. 😱

  • @DaveShap
    @DaveShap 5 months ago +8

    "We can't really pinpoint what he did, but it's just a bad vibe." - Let's see how well this stands up in court.

    • @a.thales7641
      @a.thales7641 5 months ago +2

      That feels like a feminine reason.

    • @DavidGravesExists
      @DavidGravesExists 5 months ago +1

      It won't, but that doesn't mean it's not valid. Sometimes you just get a sense that you can't trust a dude, and you later learn why.

    • @ooooneeee
      @ooooneeee 5 months ago

      But what if they are right? If Altman is a master manipulator who made sure that everything he did has plausible deniability? Who used the coup against him to come out on top. That possibility is disturbing.

  • @Allotropes
    @Allotropes 5 months ago +5

    Claude 2.1's middle-of-the-document performance sag is well known in psychology circles as the primacy/recency effect. How odd that it affects machines too ;-)

  • @georgegordian
    @georgegordian 5 months ago +2

    One of the many reasons that this is the best AI channel is the fact that references to individual items are listed in the description. How wonderfully professional!

  • @esuus
    @esuus 5 months ago +12

    Yay, finally your video about it. Not just randos sharing their opinions. And 4 minutes fresh after you posting it, I got lucky.

  • @kabedford
    @kabedford 5 months ago +9

    Thank you VERY MUCH INDEED for your phenomenal coverage of not only AI developments, but especially this week's extremely fast-breaking OpenAI story. You've done fantastically well! I really appreciate your work! :)

  • @AllisterVinris
    @AllisterVinris 5 months ago +2

    Thank you for keeping us informed on the situation!

  • @BirgittaGranstrom
    @BirgittaGranstrom 5 months ago +3

    Thank you once again for a superb and brilliant report! Your ability to connect with "old information" surpasses most reporting in the AI field. Therefore, please continue to keep track of and trust your "predictions," even if you are wise enough not to publish them before the event you foresee has occurred.

  • @gball8466
    @gball8466 5 months ago +48

    A few things that stand out:
    1. Satya is a gangster. He's no drama, extremely effective, and he put everyone in check.
    2. Toner's paper isn't an academic paper. It's an opinion piece.
    3. Equating safety with delaying a release isn't a given. You could argue that having millions of people using ChatGPT gave more actionable information regarding safety than keeping it in a box.

    • @nickb220
      @nickb220 5 months ago

      release as in making it open source?

    • @marc_frank
      @marc_frank 5 months ago

      @@nickb220 publicly available

    • @geometerfpv2804
      @geometerfpv2804 5 months ago +15

      Re: 2, not sure how experienced you are in academia, but research into ethics and philosophy is obviously incredibly subjective. It's still research. There are philosophy departments in every major university. When you are faculty at a university (unbelievably competitive), your opinion *is* scholarly knowledge. Not every field is a hard science.
      She made an incredibly mild statement: that the ChatGPT launch was rushed (no one disagrees with this), that Anthropic delayed (again, a fact), and that maybe Anthropic is being better about safety.
      How is this controversial? You could hardly say something more obviously true. She's a safety researcher. She is criticizing the safety of an unsafe AI company. OpenAI researches the race condition, but does nothing to stop or slow it. That's not safety. The employees and Sam are clearly interested in competing and being the first. If AI safety researchers don't criticize that, they aren't doing their jobs.

    • @howtoappearincompletely9739
      @howtoappearincompletely9739 5 months ago

      @@geometerfpv2804 Good take.

    • @La0bouchere
      @La0bouchere 5 months ago

      @@geometerfpv2804 Most philosophical works are just opinion pieces though. The only thing that makes them "research" is where they're published, not what the contents or processes behind them are.

  • @jsivonenVR
    @jsivonenVR 5 months ago +14

    Imagine the craziest outcome, multiply its ridiculousness by ten and you’re still short of the reality that has surfaced after you’re done.

  • @nacho7872
    @nacho7872 5 months ago +15

    Great video, thanks for reporting on this so quickly and yet giving us all the necessary context 👍👍

  • @Jordan-rv8gl
    @Jordan-rv8gl 5 months ago +14

    Jesus. What a weekend, indeed. Not sure how I feel about Sam returning. Not sure how I feel about the future in general tbh. Thanks for the solid reporting (as always).

    • @darrendoheny9768
      @darrendoheny9768 5 months ago

      The future is terrifying and optimistic all in one.

    • @Gabcikovo
      @Gabcikovo 5 months ago

      2:55

    • @Gabcikovo
      @Gabcikovo 5 months ago

      3:04 Geoffrey Irving from Google DeepMind accuses Sam Altman of lying to him on several occasions (for reasons), for being deceptive, manipulative, and only being nice to him while being worse to others including his close friends

    • @Gabcikovo
      @Gabcikovo 5 months ago

      3:38 Ms Toner 5:46 Sam Altman criticises the release of ChatGPT himself 6:25 6:28

  • @bupp291
    @bupp291 5 months ago +14

    Quite the roller coaster. Thank you for bringing the same level of professionalism in your reporting to this as you do all your AI news!

  • @Syphronix
    @Syphronix 5 months ago +6

    Top quality reporting and journalism; by far the best cumulative summary of the events to date.

  • @JohnLeMayDragon
    @JohnLeMayDragon 5 months ago +10

    Thanks for the informative video. Is there any way you could make an infographic of who the major players are and what their AIs are called? 😅 An overview would be nice. Or just a chart of users/compute/tokens etc. Deep dives into papers are always interesting and entertaining, but I'd feel more informed with a general view of the playing field every once in a while. Thanks again for making the best AI news.

  • @Serifinity
    @Serifinity 5 months ago +1

    Another great video Philip. Exciting to see so many important updates, especially with Pi and Claude 2.1. Thank you and looking forward to your big announcement soon 👍

  • @t3dotgg
    @t3dotgg 5 months ago +3

    Thank you for these updates man. You’ve consistently had the best coverage of this saga and I appreciate it immensely

    • @aiexplained-official
      @aiexplained-official 5 months ago +2

      Thanks so much t3, saw your Twitter shoutout and replied! You inspired me to make my account public.

    • @t3dotgg
      @t3dotgg 5 months ago

      @@aiexplained-official omg how did I miss that

  • @OurLifeisaMiracle
    @OurLifeisaMiracle 5 months ago

    Your work is one of the most valuable in here. Thank you so much for sharing!

  • @Modioman69
    @Modioman69 5 months ago +2

    You are doing great things for everyone by packaging this info up in a digestible length and manner in which mostly anyone can understand. I get excited when I see your videos in my feed because I know it’s always something worth watching and getting details. Keep up the good work you’re doing, I’d miss half the real substance in A.I development/news if not for this channel. 🙏🏻

  • @DaveEtchells
    @DaveEtchells 5 months ago +18

    It struck me as very interesting/odd that Claude 2.1's retrieval rate took a major hit at ~30-34K tokens, then got better at higher numbers before dropping once again. I wonder what that's about? (A sketch of how this kind of retrieval test is typically run follows this thread.)
    Another fantastic update, thanks!

    • @sebastianjost
      @sebastianjost 5 months ago +3

      I noticed the same. Maybe it's as simple as lack of training data of that length?
      Maybe there's a whole lot more to it. Definitely interesting.

    • @tracy419
      @tracy419 5 months ago +4

      Just goes to show how human it is.
      Pays pretty good attention at the beginning of the data, starts to nod off a bit as things move along, gets woke back up by an interesting noise in the data, then begins to nod a bit towards the end again.

    • @DaveEtchells
      @DaveEtchells 5 months ago

      @@tracy419 👍😂
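
As background for the retrieval-rate discussion in this thread: figures like these are typically produced by "needle in a haystack" style tests, where a single fact is buried at different depths inside a long filler document and the model is asked to retrieve it. Below is a minimal sketch of that procedure under stated assumptions; the `ask` callable is a hypothetical placeholder for whatever LLM API is being tested, and this is not Anthropic's actual evaluation code.

```python
# Sketch of a needle-in-a-haystack retrieval sweep. `ask` is a placeholder
# for a real LLM call; `filler` is a list of unrelated sentences whose total
# length sets the context size being tested.
from typing import Callable

NEEDLE = "The secret code word is 'heliotrope'."
QUESTION = "What is the secret code word mentioned in the document?"

def build_prompt(filler: list[str], depth: float) -> str:
    """Insert the needle at a relative depth (0.0 = start, 1.0 = end)."""
    idx = int(len(filler) * depth)
    doc = filler[:idx] + [NEEDLE] + filler[idx:]
    return " ".join(doc) + "\n\n" + QUESTION

def run_sweep(filler: list[str], ask: Callable[[str], str],
              depths=(0.0, 0.25, 0.5, 0.75, 1.0)) -> dict[float, bool]:
    """Return whether the model recovers the needle at each insertion depth."""
    return {d: "heliotrope" in ask(build_prompt(filler, d)).lower() for d in depths}
```

Repeating the sweep at different filler lengths is what produces the depth-versus-context-length charts referenced in the video.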

  • @Dannnneh
    @Dannnneh 5 months ago +1

    Thank you, as always, for providing these updates, and have a wonderful day.

  • @ClayFarrisNaff
    @ClayFarrisNaff 5 months ago +29

    Bravo! (Once again.) The behavior of the OpenAI board shows how domain-specific human smarts can be. These are super-achievers in their fields. Yet, having served on boards and having served boards as an executive director, I'm astounded at how badly they, as board members, handled their concerns.

    • @idcidcidcidcidcidcidc
      @idcidcidcidcidcidcidc 5 months ago +6

      You have absolutely no idea what Sam did. For all you know, Sutskever could have made a completely rational decision based on what he knows about Sam. If you still insist on disagreeing, I'd love to hear about your confidence in your ability to make smarter decisions than Ilya Sutskever in literally any domain imaginable.

    • @ClayFarrisNaff
      @ClayFarrisNaff 5 months ago

      You misunderstand me. Of course I don't know what prompted the board to act. What I know is that they acted in a most irresponsible fashion. If you're going to fire your CEO, you have to have a well documented case, and you have to give the person a chance to express their views. You also have to let your stakeholders know what's going on. Evidently none of this happened. Procedurally, it was a farce.

  • @stephenrodwell
    @stephenrodwell 5 months ago +1

    Thanks! Wild times, with big stakes. Thanks for guiding us through them. 🙏🏼

  • @RolandPihlakas
    @RolandPihlakas 5 months ago +1

    I like that you stay respectful and do not imply that people who are stronger or more popular are necessarily the ones who are right.

  • @Michael-ul7kv
    @Michael-ul7kv 5 months ago +1

    Appreciate your coverage and looking forward to getting back to your regular updates.

  • @patronspatron7681
    @patronspatron7681 5 months ago +1

    Whatever the subject matter you bring insight, clarity, humour and hope. I always feel more positive about AI and the world at large after listening to your pods.
    Thank you.

  • @normalgoat6419
    @normalgoat6419 5 months ago +162

    The AI cancer stuff is what all of AI should be about

    • @Ofer_Davidi
      @Ofer_Davidi 5 months ago +3

      True, AI has great capability when it comes to pattern detection, and that is something we should use much more than the 'intelligence' part, which is something humans are already doing very well, as much as we can 😜... If that is not clear, I will say it out loud: AGI is a nice idea, but it should be kept in the deep freezer for later use 😁

    • @wytho3751
      @wytho3751 5 months ago

      Hell yeah! Fuck cancer! Get after it, electric friends!

    • @SirQuantization
      @SirQuantization 5 months ago +15

      Agreed. Medical advancements should be #1 priority.

    • @ClaireFrancePerezWonderer
      @ClaireFrancePerezWonderer 5 months ago

      At a doctor's office of course...

    • @deandrealexander6172
      @deandrealexander6172 5 months ago

      @@ClaireFrancePerezWonderer No, for free public access

  • @Fredekkkkkk
    @Fredekkkkkk 5 months ago

    Favourite YouTube channel. You do an awesome job!
    With AGI seemingly just around the corner, should we be asking ourselves what will matter in the post-AGI world?

  • @consultantnigel-projectman7274
    @consultantnigel-projectman7274 5 months ago +1

    Spectacular analysis, as usual. Thank you!

  • @davidball8794
    @davidball8794 5 months ago +2

    Thanks...this is my go-to destination to hear signals amongst the noise. Appreciated.

  • @michaelmartinez6033
    @michaelmartinez6033 4 months ago

    Thanks for including the links to the articles and news clips you mention.

  • @UncleJoeLITE
    @UncleJoeLITE 5 months ago +1

    Another outstanding presentation. Thanks from Australia.

  • @Zilgaro
    @Zilgaro 5 months ago +28

    It's not just a revolving door, it's a damn carousel now!

  • @CarlosHfam
    @CarlosHfam 5 months ago +1

    You're the best go to on this fast changing news! Thank you for your efforts!

  • @uyaratful
    @uyaratful 5 months ago +1

    I'm extremely thankful to you for your work. I really hope that your initiative will grow.

  • @Rawi888
    @Rawi888 5 months ago

    God, I'm struggling with uni (I don't know if I made this second year of CS 😢 was pretty distracted and distressed the whole year) and now I'm being told I'm probably going to get replaced.
    Oh well.
    You're one of my biggest inspirations so I'll probably just follow in your footsteps and start creating content.
    Wish me luck my friends.
    I also hope your endeavours will be met with success. Let's hit the new year with momentum.

  • @rohitmadhavan16
    @rohitmadhavan16 5 months ago +1

    You have gained 200K subscribers and it has not even been a year. That's great man!

  • @williamjmccartan8879
    @williamjmccartan8879 5 months ago +1

    Thank you Philip, I think it's good news that Sam is back with OpenAI, as the alternative probably would have meant a setback in the timeline for reaching AGI; thank you again for all of your hard work and for sharing those efforts with us, have a great night, peace

  • @stephendebeauchamp356
    @stephendebeauchamp356 5 months ago +1

    I wasn't sure about the whole Emmett Shear thing, but based on what happened, good job man.

  • @solaawodiya7360
    @solaawodiya7360 5 months ago +29

    What a crazy weekend. At the end of it, Microsoft still wins. Something tells me the new governance changes will be more profit-focused (if there is still a non-profit agenda). Thanks for the breakdown Philip. I really appreciate your simple summarization of these nuanced topics 👏🏿❤️

  • @darrendoheny9768
    @darrendoheny9768 5 months ago +1

    Great video as always. Thank you!

  • @flixperience
    @flixperience 5 months ago +3

    You're doing a great job. Love your calm, down-to-earth approach and high-quality analysis :)

  • @ParameterGrenze
    @ParameterGrenze 5 months ago +1

    You are a hero! The information space around the Sam event is clouded with the strong and loud personal opinions of people with biases. Your reporting is what I would consider a reasonable, neutral and well-sourced attempt at providing the core narrative.

  • @shanegleeson5823
    @shanegleeson5823 5 months ago +9

    Would be interesting to see a video on how we can define AGI. I'm sure there are lots of academic approaches your channel would be especially good at evaluating.

    • @PazLeBon
      @PazLeBon 5 months ago

      it simply is not intelligent

  • @anonymes2884
    @anonymes2884 5 months ago +9

    Useful analysis as usual, cheers. All quite unsettling IMO. Altman's back and the people forced out seem to be those _most_ concerned about safety. The OpenAI employee revolt is sweet on one level but they had no idea _why_ he went and supported him anyway - that's not a rational response.
    So we're all left hoping Sam Altman doesn't make a wrong move because this episode has shown that he's basically immune even to what _may_ have been legitimate oversight.

    • @howtoappearincompletely9739
      @howtoappearincompletely9739 5 months ago +2

      Precisely my concern, too.

    • @tbird81
      @tbird81 5 months ago

      "Safety" just means censorship. Stop being doomers.

    • @ooooneeee
      @ooooneeee 5 months ago +2

      Agreed 💯. The big support of him without knowing why he was fired could be a personality cult. That open letter from his supporters didn't read very rational either, just accusing the board of a ton of stuff solely based on them not telling the employees the reasoning for their action. Not communicating sucks, but not all the reasons for it are nefarious. The board could have been in a lose-lose situation no matter how much they communicated.

  • @FRC_CR
    @FRC_CR 5 months ago +1

    Dude great video! Thanks for all the coverage, great work :)

  • @purovenezolano14
    @purovenezolano14 5 months ago

    I know it's been said already - but damn you are an excellent reporter of info. Not sensationalist - just the facts and unbiased as can be

  • @leighreynolds8761
    @leighreynolds8761 5 months ago +1

    Your reporting is well done. I can trust it. Thank you .

  • @louislecoeur2403
    @louislecoeur2403 5 months ago +1

    Really informative, deeper than most articles I’ve read.

  • @JackTheOrangePumpkin
    @JackTheOrangePumpkin 5 months ago +1

    two videos in quick succession? What a treat!

  • @Roma88572
    @Roma88572 5 months ago +2

    I can just imagine AI Explained’s war room looks like the wall from Charlie in the Always Sunny meme with all the chaos going on right now. Thanks for keeping us up to date sir.

  • @capitalistdingo
    @capitalistdingo 5 months ago +4

    They became “increasingly uneasy”.
    Maybe they should see a healthcare practitioner about that.

  • @NeuroScientician
    @NeuroScientician 5 months ago +8

    Kind of obvious that OpenAI is just a business unit of Microsoft at this point.

  • @daverei1211
    @daverei1211 5 months ago +3

    Thank you again for all of this great investigative reporting. Sorry that it’s probably a significant distraction from your research, but we appreciate your detailed interpretation.

  • @Hitjuich
    @Hitjuich 5 months ago +22

    I am glad that the focus can return to the development in AI again

    • @aiexplained-official
      @aiexplained-official 5 months ago +7

      Thar cancer detection is amazing

    • @Citrusfemboy
      @Citrusfemboy 5 months ago +1

      @@aiexplained-official That there cancer detector.

  • @nicholasboyd-gibbins9763
    @nicholasboyd-gibbins9763 5 months ago

    Love it as always man thank you

  • @martinpercy5908
    @martinpercy5908 5 months ago +1

    thanks philip great work as always

  • @R0cky0
    @R0cky0 5 months ago +4

    What really remains uncertain is how and where Ilya is gonna end up. I'd imagine his days in the "new" OAI won't be as comfortable as before

    • @tbird81
      @tbird81 5 months ago

      Hopefully not. But probably too spergie to even regret it.

  • @watcherofvideoswasteroftim5788
    @watcherofvideoswasteroftim5788 5 months ago +1

    Very good video, I really like and appreciate your coverage! One bit of feedback, and I hope you see how well you've perfected your format based on this, is that I'd like your videos in dark mode lol. My eyes get strained :(

  • @MadeOfParticles
    @MadeOfParticles 5 months ago +7

    The problem is that it doesn’t matter if one company decides what AGI is. I believe all company research labs should collaborate to define AGI. Otherwise, OpenAI won’t be able to distance itself from Microsoft in the future, because the clause in their agreement is currently very ambiguous.🤔

    • @CatfoodChronicles6737
      @CatfoodChronicles6737 5 months ago

      If it has an IQ of 100

    • @ADreamingTraveler
      @ADreamingTraveler 5 months ago +3

      Exactly. There is no one agreed upon definition on what AGI is

    • @WarClonk
      @WarClonk 5 months ago

      They should have an army of lawyers and scientists make a watertight definition. The way it is set up now, it is clear it will cause very big problems. Microsoft will probably claim it will have to walk around in a robot just like a human and do everything that a normal or smart human can, meanwhile every white-collar worker is losing their job to Microsoft bots.

  • @Lishtenbird
    @Lishtenbird 5 months ago +21

    Things can't really be "back" now that the whole world saw how people in charge of AGI aren't even in control of their own organization. I'd even say that with a display as spectacular, one has to wonder if this was intentional.

    • @ticketforlife2103
      @ticketforlife2103 5 months ago +1

      The average Joe doesn't gaf

    • @Lishtenbird
      @Lishtenbird 5 months ago +6

      ​@@ticketforlife2103The average Joe may not care, but competitors and governments now have a new thing to point at when heavier regulation becomes desirable.

    • @NitFlickwick
      @NitFlickwick 5 months ago +4

      Never attribute to malice that which can adequately be explained by stupidity. I am convinced that Ilya believed he understood what the company’s reaction would be to firing Sam (being a genius in one area does not make one a genius in every area, but smart people often don’t realize that). The rest of the board believed him and felt replacing Sam was the right call, but Ilya got it all wrong, and everything exploded.
      Just the fact that the board didn't think it was worth communicating with Microsoft in advance shows that the board only considered this from the standpoint of inside OAI. No thought was given to those impacted outside of OAI.

    • @AcidArmy_
      @AcidArmy_ 5 months ago +4

      The board didn’t suddenly appear out of nowhere; they just exercised their power in a way they were allowed to

    • @rewindcat7927
      @rewindcat7927 5 months ago

      When this sort of thing happens in a country at war, there is little doubt about outside interference.

  • @nextinstitute7824
    @nextinstitute7824 5 months ago +1

    Thank you for this video!!!!!!!!!!!! You are the only one who truly understands the issue. The incredibly all-important issue: is OpenAI going to be for the people...?

  • @asamirid
    @asamirid 5 months ago +1

    very detailed and informative, huge effort, thank you 💚💚..

  • @Justsomeone99987
    @Justsomeone99987 5 months ago +3

    I know someone whose first day at OpenAI was this Monday. Can you imagine lol

  • @BrianGlaze
    @BrianGlaze 5 months ago +2

    I agree with your assessment here. The term AGI needs very specific, clearly defined criteria. Instead of saying "most tasks", the tasks need to be specifically laid out and stated. Even then, I don't think we'll all necessarily agree that it is indeed some type of autonomous system, but it would help with the legal and business issues that could come up with Microsoft and the world at large.

  • @cruz1ale
    @cruz1ale 5 months ago +1

    Can't wait to hear what you have to say about the open letter from former employees to the board

  • @netscrooge
    @netscrooge 5 months ago

    Your being right about Anthropic shows your deep understanding. Impressive. Most other channels were just reading headlines. You certainly should pat yourself on the back!

  • @akow2655
    @akow2655 5 months ago +9

    Really hope that we don't take the wrong lessons from this; the board, and especially Ilya, probably had justified concerns that backfired immensely, and a lot of people jumped on the "EA doomers are cancer" bandwagon. This technology is the future, safety is a concern (and I don't mean Skynet doomerism, but just the proliferation of dangerous information), and this chaos just shows that we have a lot more work to do in all regards before we as a species are ready to unleash whatever full potential this technology has in store.
    We're in for a wild ride, and I think we're still standing in line, not even strapped into the rollercoaster yet.

  • @YuraL88
    @YuraL88 5 months ago +2

    It's interesting that human memory also has a similar property: we are more likely to remember facts at the beginning or the end of the story, or words/numbers at the end and the beginning of a list.

    • @felixgarciaflores
      @felixgarciaflores 5 months ago +2

      These parts often include outlines and conclusions that summarize the content of the text, so it makes sense to focus more on them.

  • @tomaszkarwik6357
    @tomaszkarwik6357 5 months ago +1

    15:32 OMG, +6% specificity over the doctors' average. That is an awesome result.

  • @zenplayer3012
    @zenplayer3012 5 months ago +1

    Great reporting mate 😎👍

  • @Redflowers9
    @Redflowers9 5 months ago +1

    I know people talk about experts "moving the goal posts" when defining AGI but I would be pretty convinced that we have AGI once the AI starts upgrading itself without human intervention.

    • @ea_naseer
      @ea_naseer 5 months ago +1

      I don't see it as shifting goalposts. In AI there are different philosophies based on what different people hypothesize is the foundation of intelligence. Some people think AI is human intelligence; others think AI is rationality. From the 50s up until the 2000s, some thought AI was symbolic systems. Whenever we prove that a particular hypothesis is wrong, or is not the foundation of intelligence, we move to another. We thought for a while that if we had a system that could understand language then we would have intelligence, but now that we are there with GPT we see that this system fails at some modes of syllogism, and thus we "shift the goalposts" to another school of thought.

  • @Adeoxymus
    @Adeoxymus 5 months ago +4

    My thought this morning upon hearing the news: poor Philip will miss his breakfast again 😂

  • @NowayJose14
    @NowayJose14 5 months ago +1

    We're stronger than ever baby😤

  • @ce9916
    @ce9916 5 months ago +2

    Because I typically watch your vids during lunch, this vid made me hungry 😭

  • @MercurialAscent
    @MercurialAscent 5 months ago +1

    Wow! That was intense. Phew!

  • @MunirJojoVerge
    @MunirJojoVerge 5 months ago +5

    I wonder what's going to happen to Ilya S. I think his reputation has been dented as a person clearly involved in this dark issue.
    As usual, Philip, thank you very much for sharing your POV

  • @rickandelon9374
    @rickandelon9374 5 months ago

    Awesome Awesome Awesome. Satya will be remembered in the history books of the post-AGI world, and the world will applaud him for keeping OpenAI, the most important company for bringing utopia, from collapsing. 🎉🎉😢❤

  • @garythepencil
    @garythepencil 5 months ago +1

    Summary: business stuff that doesn't matter at all, a cancer detector that doesn't matter in terms of AI because it's just image recognition, and finally something actually interesting: the Claude 2.1 token increase. I wonder what can be done to fix the middle errors, and why it does so well at the beginning and end of the text. Reminds me of the serial position effect from psychology, where humans remember the beginning and end of a sequence much better than the middle. Maybe people write in more predictable ways at the beginning and end of a text?

  • @thorntontarr2894
    @thorntontarr2894 5 months ago

    The final thought re: pancreatic cancer, and the earlier opinion of Larry Summers that it will affect doctors before nurses, are my key takeaways from your work, which informs my understanding. The weekend news re: the firings is quite distracting, but when $$ enters, out goes altruism.

  • @TiagoTiagoT
    @TiagoTiagoT 5 months ago +2

    Strong vibes that some severely shady stuff is taking place behind the scenes that we are not being told...

  • @alexgonzo5508
    @alexgonzo5508 5 months ago

    9:17 - notice the grey book, top left titled "Accelerate". That wall unit belongs to Nadella.

  • @serotragtmantel3982
    @serotragtmantel3982 5 months ago +1

    thank you!

  • @ericstromquist9458
    @ericstromquist9458 5 months ago +1

    I think Adam D'Angelo will be played by Tom Felton, since D'Angelo looks so much like Draco Malfoy. But more to the point, I'm surprised D'Angelo is the one left standing since his clear conflict of interest as the CEO of the company trying to market Poe vs. OpenAI's new custom GPTs meant that he should have recused himself from the vote that started all this, if not resigned from the board altogether.

  • @mattgenaro
    @mattgenaro 5 months ago +2

    Succession Season 5 is fire!

  • @centurionstrengthandfitnes3694
    @centurionstrengthandfitnes3694 5 months ago +1

    The least sensationalist and drama-obsessed of the AI channels I follow - therefore, the best.
    Keep it up, AIE.

  • @GestaltReality
    @GestaltReality 5 months ago +1

    Oh, also, the next thing to report on would be Q*, the possible AGI-like breakthrough.

  • @Ecthelion3918
    @Ecthelion3918 5 months ago

    Glad to hear the drama is over, and glad to hear about the healthcare applications of AI. It's always been one of the areas that I'm most excited about

    • @skierpage
      @skierpage 5 months ago

      The drama is far from over. There's still the report on what happened with SamA, what happens to Ilya, what happens to the 70 employees who _didn't_ sign the letter demanding the board resign, whether the new board scraps that whole "non-profit aligned for good" organization for a "SamA can do no wrong with his leadership and dodgy outside startups" charter (same as RIP Google's "don't be evil" motto)...
      I worked at a company where a lot of staff wanted the CEO out, and I was pressured to sign a letter to that effect. I refused because I didn't know the details and had no inside information apart from what the agitators were presenting, and some coworkers viewed me as disloyal. These intrigues inevitably create bad blood, and there are hundreds of tech journalists still looking for a scoop and inside dirt who will ensure it comes out.

    • @Ecthelion3918
      @Ecthelion3918 5 місяців тому

      @@skierpage You are right, I'm sure there's still quite the turmoil at OAI

  • @r-saint
    @r-saint 5 months ago +2

    First of all, congrats on 200K subs.
    Second... this whole story is fascinating. Sam is truly a prophet, and OpenAI could not exist without him. And Sutskever is the prodigy, and they're like the 2 most important people; they SHOULD work together in order for the prophecy to work xd

    • @skierpage
      @skierpage 5 months ago +2

      In no way is Sam a prophet, he's just a smart founder (which is no small thing). Obviously OpenAI would exist without him. Stop putting people on pedestals!

  • @TenOrbital
    @TenOrbital 5 months ago +1

    Now it’s come out the whole fracas was over a potential AGI breakthrough, which is what the old board claimed was being hidden from them.

  • @Theone-ou2xt
    @Theone-ou2xt 5 months ago +2

    There is news about a development at OpenAI, something referred to as Q*, which employees thought might be a significant step towards AGI or ASI; they warned the board, which led to Sam Altman's firing because he was moving too fast. (Not sure how true this news is, but Sam Altman's comment about the "veil of ignorance" hints towards something of this nature.)