Greg Brockman: OpenAI and AGI | Lex Fridman Podcast #17

  • Published 8 Jan 2025

COMMENTS • 282

  • @lexfridman
    @lexfridman  5 years ago +122

    This was a thought-provoking conversation about the future of artificial intelligence in our society. When we're busy working on incremental progress in AI, it's easy to forget to look up at the stars and to remember the big picture of what we are working to create and how to do it so it benefits everyone.
    0:00 - Introduction
    01:15 - Physical vs digital world
    02:30 - Mind: math or magic?
    03:26 - Civilization as intelligent system
    07:45 - First question for AGI
    10:10 - Keeping AGI positive
    15:45 - Teaching a system to be good
    18:15 - OpenAI's mission origins
    26:22 - OpenAI LP creation
    28:24 - Preserving mission integrity
    30:10 - Decision-making process
    32:40 - Scrutiny burden
    33:20 - For-profit AGI for world benefit
    37:50 - Charter's daily impact
    40:27 - Late-stage AGI collaboration
    42:08 - Government's role in AGI policy
    44:53 - GPT-2 release concerns
    50:30 - Internet bots
    57:37 - Unsupervised language processing potential
    59:20 - Language modeling and reasoning
    1:01:45 - General vs fine-tuned methods
    1:03:49 - Democratizing compute resources
    1:05:27 - Government-owned compute utilities
    1:07:11 - Identifying AGI without compute
    1:09:30 - DOTA
    1:15:26 - Deep learning future
    1:15:59 - Scaling projects vs new projects
    1:17:47 - Testing and impressions
    1:18:47 - OpenAI's challenges
    1:19:19 - Simulation
    1:20:00 - Reinforcement learning future
    1:21:25 - Consciousness and body for AGI
    1:24:24 - Falling in love with AI

    • @TristanCunhasprofile
      @TristanCunhasprofile 5 years ago +1

      Any thoughts on whether we'll eventually get people to agree on a good definition of AI? (or even of just intelligence: definitionmining.com/intelligence)

    • @kev9797c
      @kev9797c 5 years ago +3

      Thanks for spreading such a positive message! A lot of people share the same dream. At this point we can really feel more hopeful about the positive outcomes AGI could create.

    • @marquardtfrickert3939
      @marquardtfrickert3939 5 years ago +1

      Love it man!! @Lex Fridman!
      Why don't you go work for OpenAI??? I think it's super important to make this stuff safe, like Elon said! :)

    • @vaibhavbv3409
      @vaibhavbv3409 5 years ago

      What happens to jobs?

    • @TBOBrightonandHove
      @TBOBrightonandHove 5 years ago

      Hi Lex, I love learning about this AI stuff and hearing all the brilliant people you assemble share their thoughts and passions - what a privilege! Apologies in advance, but I can't help but respond to the 'look up to the stars' existential comment with the best of what I have come across recently, so forgive me if this seems totally irrelevant (99.99% will think so): How big is the bigger picture? See the latest video/notes by another fellow Russian explorer of the human psyche: ua-cam.com/video/ywHfNSwcCS8/v-deo.html

  • @technotarzan4044
    @technotarzan4044 1 year ago +17

    Coming back to listen to this through the lens of the last 4 years is absolutely mind-blowing. Especially with the last week!

  • @alicethornburgh7552
    @alicethornburgh7552 4 years ago +88

    Outline:
    1:15 difference between physical world and digital world
    2:30 is the mind just math, or is it magic somehow?
    3:26 civilization as an intelligent system
    7:45 if you created an AGI system, what would you ask it first?
    10:10 thoughts on how focused people are on negative effects of AGI
    12:56 difficulty of keeping AGI on a positive track?
    15:45 is it possible to teach a system to be "good"?
    18:15 origins of OpenAI's mission to create beneficial, safe AGI
    26:22 what is OpenAI LP and how did you decide to create it?
    28:24 how will you make sure other incentives don't interfere with your mission?
    30:10 what were the different paths you could have taken and what was that process of making that decision like?
    32:40 burden of scrutiny
    33:20 as a for-profit company, can you make an AGI that is good for the world?
    37:50 how does the charter actualize itself day-to-day
    40:27 switching from competition to collaboration in late-stage AGI development
    42:08 the role of government in setting policy and rules in this domain
    44:53 you released a paper on GPT-2 language modeling, but didn't release the full model because you had concerns about the possible negative effects of its availability. What are some of the effects you envisioned?
    50:30 thoughts about bots on the internet
    57:37 how far can unsupervised language processing take us?
    59:20 if you just scale language modeling, will reasoning capabilities emerge?
    1:01:45 is a general method better than a more fine-tuned method?
    1:03:49 do we need to democratize compute resources more or as much as we democratize algorithms?
    1:05:27 do you see a world where compute resources are owned by governments and provided as utility?
    1:07:11 would you be able to identify AGI without compute resources?
    1:09:30 story of DOTA, leading up to OpenAI 5
    1:15:26 where do you see deep learning heading in the next few years?
    1:15:59 when you think of scale, do you think about scaling projects or adding new projects?
    1:17:47 testing / what would impress you?
    1:18:47 exciting and challenging problems for OpenAI
    1:19:19 simulation
    1:20:00 hopes for the future of reinforcement learning and simulation
    1:21:25 are consciousness / a body necessary for AGI?
    1:24:24 will we ever fall in love with an AI

  • @m.branson4785
    @m.branson4785 1 year ago +71

    It's wild listening to this 3 years later as GPT-4 has been released.

    • @bokoma96
      @bokoma96 1 year ago

      And now, after his TED Talk ua-cam.com/video/C_78DM8fG6E/v-deo.html

    • @mcrenn5350
      @mcrenn5350 1 year ago +4

      Ikr! Was here literally for that. They knew... they were on the cusp of everything!

    • @Zyntho
      @Zyntho 1 year ago +6

      And now after he left the company in protest.

    • @m.branson4785
      @m.branson4785 1 year ago +7

      @@Zyntho Yeah, I had to come back to listen to this one and also the interview with Ilya.

    • @meartin
      @meartin 1 year ago

      Yessir 😅

  • @aqynbc
    @aqynbc 5 years ago +17

    We need more of this type of discussion. Thank you Lex for taking the time to do just that.

  • @newspeed8000
    @newspeed8000 5 years ago +11

    Amazing, this is the most important type of discussion that everyone on this planet should be having right now instead of throwing stones at each other. Loved it!

  • @RubenAlvarezMtz
    @RubenAlvarezMtz 5 years ago +142

    What's with the subliminal pictures of Mr Lex in the video? :P

    • @lexfridman
      @lexfridman  5 years ago +136

      Very strange. I see it now, like at 5:48 where my face appears for a single frame. I believe it's me from the future trying to warn humanity about AGI. Either that or it's my sleep-deprived brain screwing up the editing somehow. EDIT: YouTube now told me it's their bug. Hopefully gets fixed soon. EDIT 2: YouTube emailed me on Jun 6, 2019 and said the bug is fixed. It took a couple months, but they got it done. Great work.

    • @RubenAlvarezMtz
      @RubenAlvarezMtz 5 years ago +9

      @@lexfridman or d) all of the above :p

    • @aigen-journey
      @aigen-journey 5 years ago +5

      @@lexfridman also around 1:18:05. Took me a few tries to freeze-frame at the right moment :)

    • @MrSushant3
      @MrSushant3 5 years ago +5

      @@lexfridman No, it's not you, it's YouTube. I've come across multiple similar complaints from other YouTubers as well, esp. for long videos.

    • @lup9346
      @lup9346 5 years ago +5

      YOU MUST OBEY LEX

  • @memorabiliatemporarium2747
    @memorabiliatemporarium2747 5 years ago +13

    Lex, you're one of the few uploading actually important content to YouTube. I appreciate it, dude. Thanks and please, keep it up!
    Just started this one and I know it is going to make me think throughout all of it...

    • @glorydey5008glowlight
      @glorydey5008glowlight 5 years ago

      Hi, greetings! If you are interested in similar science and technology content, check out podcast platforms like Castbox (www.castbox.fm) and Google Podcasts. There are many educative channels on science, technology, and other varied topics which you can learn from and enjoy. I regularly listen to various podcasts. They give a lot of knowledge on many subjects. Just wanted to let you know. Have a good day! Regards!

  • @InfoJunky
    @InfoJunky 1 year ago +17

    Bring him back! GPT-4 and plugins are bananas!!! Let's hear his thoughts! He might be REAL busy right now though!!!

  • @bleachbucket9440
    @bleachbucket9440 5 years ago +6

    Thx for all your recent interviews. You've helped to expand my mind with this profound information

    • @sjwmemer4840
      @sjwmemer4840 5 years ago

      good to know we got clown of the day working on ai for us

  • @bradwrobleski666
    @bradwrobleski666 5 years ago +12

    One of the best interviews of a great mind and great ideas. Period.

  • @sreramk1494
    @sreramk1494 5 years ago +7

    8 minutes through the video... Awesome podcast! Thanks! It really feels like OpenAI has a clear view on the properties of AGI. Never seen this clarity before (or I guess I haven't been looking hard enough). The analogy with a company having a will of its own... it's really a good one! Smaller systems, confined to very specific tasks which are unrelated to the main objective, may not seem to be individually working towards the main objective, but it might be possible to reveal that the system actually moves towards the global objective by observing the overall functioning of the system. Viewing a system collectively thus projects a different view than viewing each of the individual elements of the collective system separately.

    • @RogerFedTennis
      @RogerFedTennis 5 years ago +1

      Yeah, the analogy to a corporation is spot on. Why? Because a corporation has no consciousness. It is complex and it even has conscious components, and yet, there is nothing upstairs-- it doesn't have agency, it is being dragged along by various actors acting collectively to some degree of efficiency or another. Of course, it may be best, really, if a machine with super human capabilities not have consciousness-- otherwise, ethics and morals may require granting it legal rights, at which point we have citizens who are much superior to human citizens.

  • @GregGBM7
    @GregGBM7 5 years ago +40

    After losing last August, OpenAI was finally able to beat the best human team in 5v5 Dota 2 just a few days ago. It was incredible to watch!

    • @doubleggamingmeruz678
      @doubleggamingmeruz678 5 years ago +4

      But it was only because of the limited hero pool. If OpenAI played a real game of Dota, they wouldn't even beat amateurs.

    • @GregGBM7
      @GregGBM7 5 years ago +1

      @@doubleggamingmeruz678 I noticed that too when they had OpenAI play pub games a few days later. The limited hero pool and preplanned item builds leave a lot to be desired.

    • @ImperialGuardsman74
      @ImperialGuardsman74 4 years ago

      Tbf, the best Dota team's biggest strength doesn't work vs AI. They're famous for psychological warfare: not breaking the rules, but doing lots of little things aimed at unnerving, confusing, or disheartening the other team. They can't do that vs AI. The AI probably still beats the 2nd best team too, I guess; not sure if they ever tried.

    • @wyqtor
      @wyqtor 1 year ago

      I am from 4 years into the future; you ain't seen nothing yet!

    • @GregGBM7
      @GregGBM7 1 year ago +1

      @@wyqtor 4 years later and still no AGI, smh

  • @vznquest
    @vznquest 1 year ago +12

    Lex, we need more guests like this with everything happening now...

    • @galaxyw5545
      @galaxyw5545 1 year ago +3

      He was the reason behind everything that's happening now...

    • @nicolasdominguez1890
      @nicolasdominguez1890 1 year ago +1

      Ask and ye shall receive, hahaha. You now have the Sam Altman interview, and another one I have not yet seen.

  • @egorpanfilov
    @egorpanfilov 5 years ago

    An amazing interview! Greg exposes and explains many details of the OpenAI concept which have not been widely known so far. Thank you, Lex, great work in driving and shaping the conversation!

  • @kaziboy264
    @kaziboy264 5 years ago +6

    This is one of the most interesting interviews out there

  • @PhillipRhodes
    @PhillipRhodes 5 years ago +12

    Lex, can you do an interview with Ben Goertzel at some point? He'd be a great addition to this series. Also, maybe Marcus Hutter or Pei Wang?

  • @ErikKislikChessSuccess
    @ErikKislikChessSuccess 5 years ago +4

    Nice work Lex, this was a refreshing and relaxing discussion on big, big topics.

  • @Curious112233
    @Curious112233 5 years ago +9

    44:50 I'm shocked. OpenAI was supposed to be open and share its AI developments with the world. But as soon as they develop anything really good they declare it unsafe to release, and therefore keep it private. If that is their policy, then there is nothing open about OpenAI. They are hypocrites, promoting the image of openness while holding back and presumably benefiting from their best discoveries. It's fine if they want to keep their developments private, but don't also claim to be open at the same time.

    • @owndoc
      @owndoc 4 years ago +2

      On the other hand, none of their "discoveries" have anything to do with AI. They're like Theranos - taking massive funding and delivering nothing. True AI can THINK, REASON, ARGUE, PLAN, EXPLAIN, UNDERSTAND cause and effect.

    • @teslatonight
      @teslatonight 2 years ago

      🤖🧡

    • @JetLee1544
      @JetLee1544 1 year ago

      @@owndoc Turns out "OpenAI" became the fastest-growing company in terms of users.

    • @wyqtor
      @wyqtor 1 year ago

      @@owndoc This comment didn't age well.

  • @rickharold69
    @rickharold69 5 years ago +3

    Beautiful. Love it! Thanks for the interview as always!

  • @qianma853
    @qianma853 1 year ago +2

    Can't believe this conversation was 4 years ago; very insightful.

  • @alexwhb122
    @alexwhb122 5 years ago +1

    Truly fantastic discussion. Thanks for posting and please keep them coming.

  • @MrHaqri
    @MrHaqri 5 years ago +58

    I think George Hotz of CommaAI would be a great guest for the podcast.

    • @zelllers
      @zelllers 5 years ago +2

      It would be interesting for sure

    • @kamilmazurek6070
      @kamilmazurek6070 5 years ago

      He left CommaAI, check out his LinkedIn.
      Edit: Also mentioned it on his livestreams; however, he still has shares in the company.

    • @dragon_542
      @dragon_542 5 years ago

      +1

    • @spinLOL533
      @spinLOL533 5 years ago

      MrHaqri agreed

    • @mami1455
      @mami1455 5 years ago

      back it up

  • @penguinista
    @penguinista 5 years ago +2

    Recognizing the similarity between the question of how to control corporations and how to control AGI, and then realizing that we are doing a terrible job of keeping corporations from running amok, is the main reason I am scared of the development of AGI. People with an AGI at their disposal are terrifying enough, but it will likely be governments and corporations who actually get to wield one - at least until they lose control of it.

  • @therealOXOC
    @therealOXOC 1 year ago +1

    Love your vids and it's very fun to go back and see what's going on now.

  • @VIDEOAC3D
    @VIDEOAC3D 1 year ago +1

    You were ahead of your time with this interview. Who would have foreseen the importance only a few years later?

  • @TheAIEpiphany
    @TheAIEpiphany 3 years ago +1

    Great talk. You should invite Sam as well!

  • @DrJanpha
    @DrJanpha 1 year ago +1

    One of the best public discourses on AI and a good ad hoc analysis of ChatGPT

  • @kushrami558
    @kushrami558 4 years ago +2

    Why I think this is a very important podcast.

  • @williamramseyer9121
    @williamramseyer9121 4 years ago +1

    Fun interview, and great intentions on the part of Greg Brockman and those working with him. My comments:
    1. I feel that the Turing Test fails to distinguish consciousness because it only looks at what the subject consciousness does to the external world - i.e., its communications with, and its behavior in, the external world. Consider the story of the two Chinese philosophers, which goes something like this: P1 "It's funny that frogs do not think like us." P2 "How do you know how frogs think?" P1 "How do you know how I think?" We can never know what another thinks or feels, and we cannot know if that other has consciousness - until we enter into their "mind," via some form of neural link. Of course, we will be putting our own consciousness at risk in doing so.
    2. Setting the initial conditions of an AI to prevent it from acting as a "bad player" later reminds me of the problem of raising kids. Parents have a problem in raising kids - the parents must either change their ways to present a good example (for example, stop drinking, lying or procrastinating) or hide that behavior from the kids; i.e. they want the kids to "do as I say and not as I do." However, kids eventually learn that their parents are not saints. The AI will learn the entire history of the human race quite soon in its "childhood", and why do we think that it will find a model for its own behavior in that history that bodes well for us?
    Thank you. William L. Ramseyer

  • @nachoridesbikes
    @nachoridesbikes 5 years ago

    Thank you for your videos, Mr. Fridman! Really enjoying this podcast.

  • @huuud
    @huuud 5 years ago +3

    Great interview, thank you! 🙏

  • @Spacemonkeymojo
    @Spacemonkeymojo 1 year ago +1

    The thing about sociopaths is that they are very good at convincing people they are honest and good.

  • @darrendwyer9973
    @darrendwyer9973 5 years ago +3

    It's what the AGI actually learns about reality that will dictate the actions it responds to reality with.

  • @totalhighconcept
    @totalhighconcept 1 year ago +4

    He just resigned following Sam’s termination

  • @Glowbox3D
    @Glowbox3D 1 year ago

    This was three years ago now; I think we need Greg back on. ChatGPT (and soon Bing) are next level at this point.

  • @jeff_holmes
    @jeff_holmes 5 years ago +3

    I like Greg's idea of making choices about setting initial conditions for technologies and other developments in societies. One wonders how corporations might be different if the initial conditions were set with more of a societal impact consideration in mind. Although tweaks and changes can be made along the way (as we see with the Internet), these presumably become more difficult as systems become more embedded within cultures.

  • @lemairecarl
    @lemairecarl 5 years ago +1

    The problem with not being able to distinguish between humans and bots is that bots can be copied perfectly. The values of a bot can be copied perfectly, whereas values are transmitted imperfectly between humans. An imperfect transmission of values allows for a perpetual renewal of our value systems. A single person could create thousands of bots propagating his values of restricting the freedom of a certain category of people, for example. Let's try to avoid this.

  • @Muskar2
    @Muskar2 5 years ago

    Talking about positive vs negative outcomes, I think it's relevant to focus deeply on both. Just as it is natural for cynicism to exist, it's also natural for optimists to exist. Focusing on the positive helps maintain motivation and drive toward progress. But even though cynicism is counterproductive, you also need to respect negative consequences, especially for AGI. The reason simply is that the technology will be irreversible, extremely powerful, and there's a risk that it will not be controllable. AI safety researchers have a bunch of papers out on the worries (and solutions to some of them), and it's common to see a need for ~4 decades of AGI safety research combined with thinking that AGI could be less than that away.

    • @Muskar2
      @Muskar2 5 years ago

      Terminator is an example of a poor representation of what a bad AGI might look like. A bad AGI is much more likely to be like a stamp-collection optimizer that turns the entire world into stamp utility, preventing humans from turning it off because that would mean fewer stamps. It doesn't have to be conscious to be bad. I think the AI safety researchers' actual concerns need to be more common in the public sphere, rather than easily dismissable dystopian scenarios. Because it's an important part of our future, and thus it's something that concerns a lot of people.

  • @acommontribe7212
    @acommontribe7212 2 years ago

    Watching this in January 2023 kicks different 😁

  • @ShmuelFuehrer
    @ShmuelFuehrer 5 years ago +1

    Amazing conversation

  • @aydinhartt
    @aydinhartt 1 year ago +1

    “Automate Human Intellectual Labor” 🤯🙌

  • @shilohadminshilohpaintingi4769
    @shilohadminshilohpaintingi4769 5 years ago +1

    Great dialogue. It seems like “generative design” was written about and reported on everywhere two years ago and now I’m having a hard time seeing where it’s going and how it’s advancing.

  • @funmeister
    @funmeister 5 years ago

    It's not about predicting what a transformative technology will be like in the future at all (e.g. 10:52 - describing Uber in the 1950s); it's about the effects of a technology (AGI) that effectively equals or surpasses human intelligence. All technologies thus far (including Uber) have basically been narrow AI automatons at best, never an AGI that can think for itself as its own species and is effectively by definition equal (and very quickly superior) to homo sapiens.

  • @FirstnameLastname-rc4xq
    @FirstnameLastname-rc4xq 10 months ago

    Damn, interesting to look back at this when the AGI is upon us

  • @FlorestanTrement
    @FlorestanTrement 5 years ago +3

    The idea of good & evil being an absolute is silly; it clearly is a relative thing. The universe doesn't care about anything, it just is. Only living species need good or evil to guide themselves. What is good will vary depending on the species and, to some extent, the individual.
    Loosely, what's evil is what is not good, and what is good is whatever participates in the permanence of the existence of the species/lineage. Solving what is good by this definition isn't always easy.

  • @ProdByGhost
    @ProdByGhost 2 years ago

    Glad this came on as the next video.

  • @romandzhadan5546
    @romandzhadan5546 2 years ago

    Wonderful talk ❤️

  • @loveisfreetobelikedisearne1920
    @loveisfreetobelikedisearne1920 5 years ago

    With all the positivity my morning coffee dose can produce, I still have a feeling the genius guest is a goofball in the scientific, and the philosophical, sense, but on the other hand I think he could be a great herbalist dude :)

  • @mmitja
    @mmitja 5 years ago

    The first thing AGI will do is figure out the laws of physics and then in two split seconds afterwards create itself a black hole and in the process upload itself into it so as to be able to communicate with other AGIs who did the same thing already. Wetware will (of course) be deleted immediately.

  • @jjhepb01n
    @jjhepb01n 4 years ago +3

    Interesting to re-watch this now that GPT-3 has been released.

  • @NeuroReview
    @NeuroReview 4 months ago

    Rating: 8.2/10
    In Short: Good ol' classic 'Artificial Intelligence' Podcast
    Notes: Love this kind of episode from the early days of the 'Artificial Intelligence' podcast--crappy video, cool guest, and mostly good/fun questions from Lex. Greg was a very interesting and thoughtful guy, and was, interestingly, a lot like Sam Altman, with whom he ended up working on ChatGPT years after this podcast aired. It's interesting to hear this convo years later, as a lot of the things they talk about became much more relevant, interesting, and newsworthy, so the convo seems a bit ahead of its time. For a classic comp sci AI guy, Greg had a bit of humor and charisma that was great to see, and Lex and Greg had great flow and chemistry throughout. My biggest complaint is how short this convo was and the lack of easy timestamps.

  • @anthonyleonard
    @anthonyleonard 5 years ago

    Thank you for this thought-provoking conversation. Regarding the question about whether a body is required to create AGI, wouldn't any data feed (external or self-generated) constitute a body for an algorithm that has AGI potential?

  • @JaySeeThunder
    @JaySeeThunder 5 years ago

    Great talk... Thank you.

  • @istjmoneymaker
    @istjmoneymaker 5 years ago

    Math is logic. It's that simple. Our formulas and processes in math are a machine to pass facts through. In my logic class I learned that all there is to it is premises and conclusions. You only need two facts to make an argument: a premise and a conclusion. More than one premise in an argument means connecting one conclusion to the next over and over until you reach the ultimate conclusion. One thing I've realized today, something we already know but never realized in the AI space, is that intelligence is a different metric than general intelligence. They are different skills and mental capabilities. A person can be very generally intelligent, but not specifically intelligent, like a mathematician. Free-form intelligence vs rules-based intelligence are different things. In our approach we need to realize this.

  • @danypell2517
    @danypell2517 1 year ago +1

    Bring him backkkkk! pls

  • @pedroj012
    @pedroj012 5 years ago +1

    Cool interview, I certainly didn't grasp all of it, but as more of a layman I feel like I caught enough to know vaguely what's going on with certain parts of OpenAI. I've listened to a couple of these, and I think in every one of them Lex asks about whether or not an AI can be made that one can fall in love with and how soon this could be done. Lonely, Lex?

  • @ebp03ex
    @ebp03ex 5 years ago

    Assuming "well" is an understood input -- Pending the insight available, how do we ask AGI of future intention? How proper is the assumption that general intelligence will help humanity if it is built via our own programming logic? If such an intelligence were to consider the context of macro-entity health, humans may not be conducive to a future positive transformation, rather than a transitional handoff. Do we destroy via ego, and thereby program via ego? Have we yet to overcome such shortfalls as a humanity? Why are human happiness and wealth considered meaningful? Is the democratization of computing power a likely result of peace-time dividends resulting from war-like or competition-based conflict?

  • @huemungus69
    @huemungus69 5 years ago

    Why do you choose to edit these conversations rather than leave them organically unedited?

  • @justinmallaiz4549
    @justinmallaiz4549 5 years ago +7

    This ‘are we living in a simulation’ question is really bugging you...
    eh Lex? 🙂

  • @ValerianTexeira
    @ValerianTexeira 5 years ago +3

    There were hundreds of thousands of people with Einstein's brain capacity before him who had no opportunity to discover relativity theory. And hundreds of thousands born after him who lost the chance to discover it because Einstein had already found it.

    • @jacktseng4909
      @jacktseng4909 5 years ago

      vallab so you think. You may say that in mathematics and general physics. Not so quick in relativity.

    • @ValerianTexeira
      @ValerianTexeira 5 years ago

      @@jacktseng4909 Perhaps not exactly the same way but in general more or less when it applies to the predecessors and successors.

  • @TheMateusrex
    @TheMateusrex 4 years ago +1

    Lex: "Some kid sitting in the middle of nowhere might not even have a 1080."
    Me: **overclocks 980ti for training fire**

  • @krause79
    @krause79 1 year ago

    Very few people quite understood the significance of this interview 4 years down the line. The pandemic years have been quite productive for the folks at the not-so-open OpenAI.

  • @OldGamerNoob
    @OldGamerNoob 5 years ago

    My intuition is that language processing BY ITSELF would not get further than the ability to have a conversation with the whole of the internet at once. It will respond to you with the collective information/"knowledge"/"memory" it has been trained on, but it will be unable to interpret that information to create new ideas beyond what you could get by simple linguistic manipulation of the training data itself. I think he's right; further logic is needed.
    Would probably still be an interesting conversation, though.

  • @piyakuslanukhun9185
    @piyakuslanukhun9185 1 year ago +1

    Now OpenAI is very famous, wow.

  • @darrendwyer9973
    @darrendwyer9973 5 years ago

    If you create a general learning algorithm, it will eventually learn literally everything it can from its reality; it's what the learning algorithm can then do with this information that makes the difference.

  • @SergioRaya
    @SergioRaya 1 year ago

    I'm here trying to learn more about Greg given the OpenAI situation.

  • @robertoooooooooo
    @robertoooooooooo 1 year ago

    I think people only now get how intelligent the guy is.

  • @FritzSchober
    @FritzSchober 5 years ago +1

    I had to check if I had my video playback speed set to 1.25. He sounds that way.

  • @rohithdsouza8
    @rohithdsouza8 5 years ago

    Resume @ 40:32

  • @Luka_hunnybear
    @Luka_hunnybear 1 year ago +1

    What has happened to our boys, Sam Altman and Greg Brockman? Moreover, what’s happening to the AI Movement? Even Meta is NOW in the AI restructuring activities.

  • @ramalingeswararaobhavaraju5813
    @ramalingeswararaobhavaraju5813 5 years ago

    Good afternoon Mr.Lex Fridman sir and Good afternoon Mr. Greg Brockman sir. Thank you so much sir for giving good information on AGI and so many good things.

  • @StealHrtVideo
    @StealHrtVideo 5 years ago

    "What was your first meeting?" "Oh, our first meeting was to figure out whether this not-for-profit organization was going to be profitable or not." How inspiring that is. "Oh, we don't exist to create the AGI, we just want to be the entity that gets to benefit financially from someone else's creation of the AGI." Got to love the capitalist mindset this dude has.

  • @hartmut-a9dt
    @hartmut-a9dt 1 year ago

    I think when AGI is completed, those words will be put aside very fast.

  • @carcolgeo
    @carcolgeo 5 years ago +1

    Laughed out loud at 1:04:00 when Lex compares having one GTX 1080 to having no GPU at all. True for ML though.

  • @supersnowva6717
    @supersnowva6717 5 years ago

    Thanks Lex!

  • @Lihinel
    @Lihinel 5 years ago +3

    AI researchers: *change Turing Test to be based on ones ability to acquire an understanding of calculus*
    Most humans: *considered robots now*
    ~\(° o °)/~

    • @justinmallaiz4549
      @justinmallaiz4549 5 years ago

      Lihinel : the ‘consciousness meter’ read 1010110 when they pointed it at me... is that good?

  • @antigonid
    @antigonid 5 years ago +4

    Get Andrew Ng on the channel

  • @sivatronics
    @sivatronics 1 year ago

    I'm now convinced that if we have managed nuclear weapons responsibly up to this point, we can similarly handle Artificial General Intelligence (AGI) effectively.

  • @hpefidra
    @hpefidra 5 years ago

    Thanks for this.

  • @Feel_theagi
    @Feel_theagi 5 years ago +1

    I love the look Lex gives the camera after being asked for a list of impactful nonprofits at 34:16.

  • @thepr0m3th3an
    @thepr0m3th3an 5 years ago +1

    Question Asked
    "How can you have privacy and anonymity in a world where finding the content you can really trust is by looking where it comes from?"
    Answer = Blockchain

    • @eklim2034
      @eklim2034 5 years ago

      Adam Malin Zero-Knowledge Proof

  • @sidenote1459
    @sidenote1459 1 year ago +1

    I think it might be time for a round two....

  • @oudarjyasensarma4199
    @oudarjyasensarma4199 5 years ago

    Please Interview Geoff Hinton!!!

  • @ForestZhang
    @ForestZhang 5 years ago +2

    Ghost frames! Could not pinpoint the exact frames! Like the videos by Lex and MIT.

  • @zainabjawad3562
    @zainabjawad3562 5 years ago +1

    Yes, I think the development process for AI is a little bit slow.

  • @leonqiu-s7g
    @leonqiu-s7g 1 year ago

    Good conversation

  • @mbaske7114
    @mbaske7114 5 years ago +1

    Thanks for sharing these talks! - I can get my head around narrow AI: It's domain specific, it solves particular sets of problems. How would you define AGI though? Is it quantitatively different, meaning it's just the sum total of many narrow AIs? Or is there a difference in quality? If so, what's the extra ingredient that distinguishes it from narrow AI? I feel like these terms are getting thrown around a lot, but I'm missing precise definitions.

  • @shandi1241
    @shandi1241 2 years ago

    8:25 and who is Ilya anyway?

  • @aiamfree
    @aiamfree 3 months ago

    @5:37 stop the cap... no one but Einstein was figuring that one out 😅

  • @Dragonblood94
    @Dragonblood94 5 years ago

    I like that he sees consciousness in this relative way. I wonder what would be the first words you'd feed into GPT-3 to test its consciousness abilities, and would it even matter?

  • @calvingrondahl1011
    @calvingrondahl1011 3 years ago

    Be honest... it will be better. Love connects, fear disconnects.

  • @Lbj441
    @Lbj441 1 year ago +1

    Who is there after they got fired?

  • @darrendwyer9973
    @darrendwyer9973 5 years ago +1

    What would happen if you exposed a general learning algorithm to the entire internet, then just let it run for a few years?

  • @stanislavkunc8732
    @stanislavkunc8732 2 years ago

    This is a very interesting interview to watch after ChatGPT was released.

  • @DonG-1949
    @DonG-1949 11 months ago

    Asking AGI how to solve the problem of alignment/ensuring a positive impact on humanity? If we were to trust and act on whatever answer it gives, wouldn't that pretty much have to rely on the assumption that those problems are already solved? This guy is clearly 100 times smarter than me, so I feel like I must be missing something...

  • @synt.3760
    @synt.3760 1 year ago +1

    7:45 aged well

  • @eklim2034
    @eklim2034 5 years ago +2

    Tech will always be a few steps ahead of policy.

  • @jeangiraldetienne8182
    @jeangiraldetienne8182 1 month ago

    Brilliant guy

  • @prestonjensen6172
    @prestonjensen6172 5 years ago +1

    Greg "what it boils down to" Brockman