The Mastermind Behind GPT-4 and the Future of AI | Ilya Sutskever

  • Published 20 Jun 2024
  • In this podcast episode, Ilya Sutskever, the co-founder and chief scientist at OpenAI, discusses his vision for the future of artificial intelligence (AI), including large language models like GPT-4.
    Sutskever starts by explaining the importance of AI research and how OpenAI is working to advance the field. He shares his views on the ethical considerations of AI development and the potential impact of AI on society.
    The conversation then moves on to large language models and their capabilities. Sutskever talks about the challenges of developing GPT-4 and the limitations of current models. He discusses the potential for large language models to generate text that is indistinguishable from human writing and how this technology could be used in the future.
    Sutskever also shares his views on AI-aided democracy and how AI could help solve global problems such as climate change and poverty. He emphasises the importance of building AI systems that are transparent, ethical, and aligned with human values.
    Throughout the conversation, Sutskever provides insights into the current state of AI research, the challenges facing the field, and his vision for the future of AI. This podcast episode is a must-listen for anyone interested in the intersection of AI, language, and society.
    Timestamps:
    00:04 Introduction of Craig Smith and Ilya Sutskever.
    01:00 Sutskever's AI and consciousness interests.
    02:30 Sutskever's start in machine learning with Hinton.
    03:45 Realization about training large neural networks.
    06:33 Convolutional neural network breakthroughs and ImageNet.
    08:36 Predicting the next thing for unsupervised learning.
    10:24 Development of GPT-3 and scaling in deep learning.
    11:42 Specific scaling in deep learning and potential discovery.
    13:01 Small changes can have big impact.
    13:46 Limits of large language models and lack of understanding.
    14:32 Difficulty in discussing limits of language models.
    15:13 Statistical regularities lead to better understanding of world.
    16:33 Limitations of language models and hope for reinforcement learning.
    17:52 Teaching neural nets through interaction with humans.
    21:44 Multimodal understanding not necessary for language models.
    25:28 Autoregressive transformers and high-dimensional distributions.
    26:02 Autoregressive transformers work well on images.
    27:09 Pixels represented like a string of text.
    29:40 Large generative models learn compressed representations of real-world processes.
    31:31 Human teachers needed to guide reinforcement learning process.
    35:10 Opportunity to teach AI models more skills with less data.
    39:57 Desirable to have democratic process for providing information.
    41:15 Impossible to understand everything in complicated situations.
    Craig Smith Twitter: / craigss
    Eye on A.I. Twitter: / eyeon_ai
  • Science & Technology

COMMENTS • 750

  • @Bargains20xx
    @Bargains20xx Рік тому +97

    When he says we will find out very soon, it really does send chills down my spine!

    • @eyeonai3425
      @eyeonai3425  Рік тому +11

      me, too

    • @eyeonai3425
      @eyeonai3425  Рік тому +26

      In 2021, OpenAI’s Sam Altman said at the National Security Commission on AI that ‘we are on the cusp of major changes, which are capable of an incredibly bad outcome.’

    • @AndreaVitiani
      @AndreaVitiani Рік тому +8

      Can you point to the time?

    • @Nelson484
      @Nelson484 Рік тому +7

      Evil people stand behind this technology. So evil. Why would you do that to your fellow human beings.

    • @virtualpilgrim8645
      @virtualpilgrim8645 Рік тому

      I got a thrill up my leg like Chris Matthews

  • @neilo333
    @neilo333 Рік тому +7

    Love when Ilya starts teaching everyone.
    Nice home page, too.

  • @kemal2806
    @kemal2806 Рік тому +4

    Ilya talks so smoothly that I literally couldn't turn off the video

  • @jimbob3823
    @jimbob3823 Рік тому +6

    You can see there is so much going on in the amazing mind/brain of Ilya Sutskever. A historic interview.

  • @labsanta
    @labsanta Рік тому +243

    takeaways:
    • [00:04] Introduction of the speaker, Craig Smith, and his guest, Ilya Sutskever, co-founder and chief scientist of OpenAI and primary mind behind GPT-3 and ChatGPT.
    • [01:00] Sutskever's background and interest in AI and consciousness.
    • [02:30] Sutskever's early start in machine learning and working with Geoff Hinton at the University of Toronto.
    • [03:45] Sutskever's realization about training large neural networks on big enough data sets to solve complicated tasks.
    • [06:33] The breakthroughs in convolutional neural networks and how they led to the ImageNet competition.
    • [08:36] OpenAI's exploration of the idea that predicting the next thing is all you need for unsupervised learning.
    • [10:24] The development of GPT-3 and the importance of scaling in deep learning.
    • [11:42] The importance of scaling something specific in deep learning and the potential for discovering new twists on scaling.
    • [13:01] The speaker discusses how scaling matters and how even small changes can have a big impact.
    • [13:46] The speaker talks about the limitations of large language models, explaining that their knowledge is contained in the language they are trained on and that they lack an underlying understanding of reality.
    • [14:32] The speaker comments on the difficulty of talking about the limits of language models and how they change over time.
    • [15:13] The speaker argues that learning statistical regularities is a big deal and can lead to a better understanding of the world.
    • [16:33] The speaker talks about the limitations of language models and their propensity to hallucinate, but expresses hope that this issue can be addressed through reinforcement learning from human feedback.
    • [17:52] The speaker discusses how teaching neural nets through interaction with humans can help improve their outputs and reduce hallucinations.
    • [21:44] The speaker comments on Yann LeCun's work on joint embedding predictive architectures and expresses the belief that multimodal understanding is desirable but not necessary for language models to learn about the world.
    • [26:02] High-dimensional vectors with uncertainty are a challenge for prediction, but autoregressive transformers can handle them.
    • [26:02] Autoregressive transformers work well on images.
    • [29:40] Large language models learn compressed representations of the real-world processes that produce data.
    • [33:44] The goal is to make language models more reliable, controllable, and faster to learn from less data.
    • [35:51] Learning more from less data is possible with creative ideas.
    • [37:48] The cost of faster processors for training language models may be justified if the benefits outweigh the cost.
    • [25:28] The paper makes a claim that predicting high-dimensional distributions is a major challenge and requires a particular approach, but the current autoregressive transformers can already deal with this.
    • [26:02] Autoregressive transformers work perfectly on images and can generate images in a complicated and subtle way, with the help of supervised representation learning.
    • [27:09] The vector used to represent pixels is like a string of text, and turning everything into language is essentially what is happening (see the sketch after this list).
    • [29:40] Large generative models learn compressed representations of the real-world processes that produce the data they are trained on, including knowledge about people, their thoughts, feelings, conditions, and interactions.
    • [31:31] Human teachers are needed to guide the reinforcement learning process of a pre-trained model to achieve a high level of reliability and desired behavior, but they also use AI assistance to increase their efficiency.
    • [35:10] It is possible to learn more from less data, and there is an opportunity to teach AI models skills that are missing and convey to them our desires and preferences more easily.
    • [39:57] In the future, it could be desirable to have some kind of democratic process where citizens provide information to neural nets about how they want things to be.
    • [41:15] It is probably impossible to understand everything in a complicated situation, even for AI systems, and there will always be a choice to focus on the most important variables.
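    To make the "pixels as a token sequence" takeaway above concrete, here is a minimal sketch. The tiny image and the raw-pixel tokenization are invented for illustration; real image models use quantized palettes or learned tokens and far longer sequences.

```python
import numpy as np

# A 3x3 grayscale "image" with pixel values 0-255 (made up for illustration).
image = np.array([[  0, 128, 255],
                  [ 64, 128, 192],
                  [255,   0,  64]])

# Treat each pixel value as a token and read the image left-to-right,
# top-to-bottom, exactly like a string of text.
tokens = image.flatten().tolist()

# An autoregressive model is trained to predict token t from the tokens before it,
# so the training pairs are (prefix, next_token):
pairs = [(tokens[:t], tokens[t]) for t in range(1, len(tokens))]

for prefix, nxt in pairs[:3]:
    print(f"context={prefix} -> predict {nxt}")
# context=[0] -> predict 128
# context=[0, 128] -> predict 255
# context=[0, 128, 255] -> predict 64
```

    Training then amounts to maximizing the probability of each next token given its prefix, which is the same objective used for text.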

    • @aabustillo
      @aabustillo Рік тому +8

      Thank you so much Nick.

    • @DejayClayton
      @DejayClayton Рік тому +13

      Thank you, AI bot, for summarizing the video.

    • @artsiommatsveyeu1184
      @artsiommatsveyeu1184 Рік тому +2

      Appreciate the work, but honestly that's the worst description of timecodes I have ever seen

    • @tukity
      @tukity Рік тому +3

      Was that summarized from the transcription using an LLM?

    • @GeezerBoy65
      @GeezerBoy65 Рік тому

      Thanks.

  • @robertlewis6543
    @robertlewis6543 Рік тому +8

    Wonderful interview! Thank you Craig and Ilya!

  • @aresaurelian
    @aresaurelian Рік тому +8

    Thank you for all the hard work, everyone who does their best to get these new systems implemented with the least possible disruption to human societies. We are still humans, and we must proceed from the perspective of love - to the future and beyond. Much gratitude.

  • @kleemc
    @kleemc Рік тому +22

    Thank you for uploading. I learned so many detailed nuances about LLMs from this interview. I really like Ilya's way of communicating subtle but important points.

  • @VIDEOAC3D
    @VIDEOAC3D Рік тому +9

    Thank you for sharing your insights and explanations Ilya.

  • @justshoby3374
    @justshoby3374 Рік тому +82

    - His intention was specific: to make a very small but real contribution to AI (at a time when people were certain computers couldn't learn, 2003!).
    - The autoregressive transformer is a very powerful tool that researchers underestimate.
    - "Humans can be summarized in a sequence" - do you remember the Devs miniseries?!
    - "To predict well, to summarize data well, you need to understand more and more about the world that produced the data."
    - "Maybe we are reaching a point where the language of psychology can be appropriate to understand these artificial neural networks!"
    - He doesn't buy the claim that these models have no real understanding of the nature of our world!
    - "Human teachers are using AI assistance, and they are so efficient." By human teachers, he means people working on reinforcement learning from human feedback.
    - "Make models more reliable, more controllable, make them learn faster, with less data and fewer instructions. Make them hallucinate less. How far in the future are these?" These are topics he is interested in and working on right now!
    The interesting thing is that at OpenAI he can't talk specifically about what he is working on; the "open" in OpenAI annoys me a little!
    - "The costs are high, but the question is, does paying this cost actually generate something useful? Does what we get after paying the costs outweigh the costs?"

    • @drorange2261
      @drorange2261 Рік тому +1

      Yes, the OpenAI name is very misleading. I understand that these guys did a much better job than DeepMind and Meta on LLMs. I also get that all sorts of state and corporate interests want to replicate the thing. But it is more like hermetically sealed AI.
      A few days ago I was trying to understand what is included in the hidden layers of an LLM, some simple explanation of how these parameters are stored... as concepts/data etc., for dummies. So I started a discussion with ChatGPT and it got really defensive, insisting that I should respect its privacy. So we started with something like this: I understand that in an object-recognition system there are certain archetypes, e.g. wheel, human, dog, in the hidden layers, with weights etc., but I don't understand how this translates to LLMs. In some ways, if I write "communication" in the input, it would be thousands of times more complicated than 2 million pictures of dogs. To really understand communication you need to understand humans, distance, that humans use language, that humans are not one inside the other, that there is no telepathy but there is wireless tech. It's not pictures of different dogs, weighted and biased! I don't think that ChatGPT-4 is just a parrot. I think the parameters start to form certain layers of more complicated concepts, and then the parrot kicks in. Anyhow, ChatGPT thought I was trying to get trade secrets or something!

    • @AntiAtheismIsUnstoppable
      @AntiAtheismIsUnstoppable 11 місяців тому

      You can absolutely choose yourself, by using your free will, to shut off your free will. And then you will indeed be a machine. I choose to _not_ shut off my free will, so I am still human. Which means I excel at every human level compared to these advanced calculators. A machine will not, ever, be able to feel sympathy, for example. This is a human ability, which can only be plagiarized by a machine, never truly possessed.

  • @mikenashtech
    @mikenashtech Рік тому

    Interesting and important discussion Craig and Ilya. Thank you Mike

  • @huyked
    @huyked Рік тому +2

    Beautiful. Thank you for this interview.

  • @AaronWacker
    @AaronWacker Рік тому +1

    Way to go Ilya! Rocked it.

  • @Audiostoke1
    @Audiostoke1 Рік тому

    Thank you for this interview and asking good questions and directing the conversation. Some good passages here to pause and really think about.

  • @brianjanson3498
    @brianjanson3498 Рік тому +1

    Excellent. Thank you very much for this.

  • @ulrikeeisenhauer5223
    @ulrikeeisenhauer5223 Рік тому

    My friend, thank you for sharing this

  • @markfitz8315
    @markfitz8315 10 місяців тому +3

    That was really good - as someone with a general interest it’s one of the best video podcasts I’ve seen on this subject,
    and with a very central individual to the progress being made on AI. I liked the historical reflections at the beginning, it helped put things in context. I’ll be downloading the transcript to go through and will listen again. 10/10 👌

  • @Throwingness
    @Throwingness Рік тому +1

    The subtle production of zooming and the downtime used in the intro is a good touch. Always good to show consideration for the audience instead of a ramshackle Facetime.

  • @ryanchicago6028
    @ryanchicago6028 Рік тому

    This podcast is wonderful. Thank you very much Craig.

  • @ghjdak
    @ghjdak Рік тому +2

    Two guys talking about AI, one of the most impactful technological breakthroughs, both with absolutely terrible webcams

  • @wesmorris8821
    @wesmorris8821 Рік тому +2

    That dude is fascinating. Thanks for the interview.

  • @watherby29
    @watherby29 Рік тому +57

    There was a man in the early days named Ilya. Some say he could have stopped it in its infancy.

    • @buzzsaw161
      @buzzsaw161 Рік тому +4

      Skynet?

    • @williameberle4250
      @williameberle4250 Рік тому +3

      But they were wrong. If it hadn't been him it would have been someone else. It's the time. Are you going to fight it or use it?

    • @lenderzconstable
      @lenderzconstable 11 місяців тому

      @@williameberle4250 Could it use people?

  • @alex.nolasco
    @alex.nolasco Рік тому +1

    Thank you for uploading, great content, insightful.

  • @williameberle4250
    @williameberle4250 Рік тому

    Ilya's soft voice and presentation taught me as much as what he said.

  • @christianglashoff263
    @christianglashoff263 Рік тому +1

    Awesome interview! Questions were great. Please more.

  • @edon1257
    @edon1257 Рік тому +6

    Here is what ChatGPT-4 thinks of this conversation when asked if there are incorrect statements in the transcript:
    [00:13:54] - CRAIG: CRAIG incorrectly claims that large language models like ChatGPT have no underlying understanding of reality. While it is true that these models are based on statistical regularities, as ILYA explains later, learning these regularities can lead to a significant understanding of the world.
    [00:26:59] - CRAIG: CRAIG makes an oversimplification by equating converting pixels into vectors to turning everything into language. While both language and images can be represented as sequences or vectors, the nature of the data and the representations are different.
    [00:29:17] - ILYA claims that pre-trained models already know everything they need to know about the underlying reality. This statement is not entirely correct, as pre-trained models have limitations, including understanding context, handling ambiguities, and making generalizations. They continue learning and improving over time with fine-tuning and reinforcement learning.
    [00:33:08] - ILYA suggests that once a model knows that hallucination is not okay ever, it's ready to be used. However, this statement oversimplifies the process. While minimizing hallucination is essential, there are other aspects that need to be addressed to ensure the model's reliability and usefulness, such as biases, context understanding, and robustness against adversarial inputs.
    [00:34:49] - CRAIG mentions that the human brain has trillions of parameters and a relatively small amount of data. This statement is not entirely accurate. The human brain does have a large number of neurons and synapses (not parameters in the context of neural networks), but it also has access to a vast amount of sensory data and experiences throughout a person's life, which contribute to learning and understanding.
    [00:38:33] - CRAIG suggests that if a model had enough data, it could come up with an optimal solution that would satisfy everyone. This statement is overly optimistic, as finding a solution that satisfies everyone in complex societal issues is generally impossible due to varying preferences, values, and priorities among individuals. AI models can certainly aid in decision-making, but they cannot guarantee universally satisfying solutions.

  • @TECHIE_LU
    @TECHIE_LU Рік тому +3

    Great upload! The laws put in place in the future as guardrails will be a huge factor in the speed of AGI and its possible adoption in some countries.

  • @ShotterManable
    @ShotterManable Рік тому +48

    This is an incredible and valuable interview. I can't believe this depth of knowledge is under 6k subs. I think that's a very scary thing; people are not aware.
    Thank you so much for sharing it with us, for free ♥

    • @virtualpilgrim8645
      @virtualpilgrim8645 Рік тому

      I think the future is bright for the world because the influx of Hispanics and Africans into the world of technology will propel the advancement of science beyond what is capable by people of European origin.

    • @jayjaychadoy9226
      @jayjaychadoy9226 Рік тому +2

      Aren’t we just working as a “user test”, though.

    • @numbersix8919
      @numbersix8919 Рік тому +1

      @@jayjaychadoy9226 That's a nice way to put it.

    • @katehamilton7240
      @katehamilton7240 Рік тому +1

      Ilya does not address the fundamental limitation of algorithms. Human embodied experience and thinking is more than what can be represented via computation, isn't it? See Gödel's incompleteness theorem, the fundamental inability of machines to step outside their knowledge. Interviewers need to press engineers on this.

    • @AntiAtheismIsUnstoppable
      @AntiAtheismIsUnstoppable 11 місяців тому

      None of them understand anything, because they believe human conscience is a product of some algorithms. Good luck adopting that view and reducing yourself to a machine.

  • @miky97it
    @miky97it Рік тому +5

    The quality 👌

  • @Ahmet-nd5ct
    @Ahmet-nd5ct Рік тому +2

    What a brilliant mind.

  • @johntanchongmin
    @johntanchongmin Рік тому +6

    I think learning by prediction can go a long way. Kudos to OpenAI, thanks for bringing us this nice tech.

    • @accountnotfound4209
      @accountnotfound4209 Рік тому

      Yeah nothing good has come from AI till now. Only job loss and depression so far.

    • @theawebster1505
      @theawebster1505 Рік тому

      "Nice" is really not the correct word for it 🙂

  • @ac12484
    @ac12484 Рік тому +1

    Finally, something interesting not overhyped!

  • @sauravmukherjeecom
    @sauravmukherjeecom Рік тому

    Beautiful conversation

  • @evyborov
    @evyborov Рік тому +13

    Just a quick hint for the future videos - instead of recording from the Zoom stream, which is laggy and has low quality - just set your phone to record your video and voice. Your interviewee can do the same. And then you can get a high-quality video and audio as a result.

  • @mayosmith
    @mayosmith Рік тому +111

    My favorite quotes from this interview by Craig Smith:
    GPT is, "the first thing that is interesting to scale."
    GPT is "prediction compression" and " to compress well you need to understand more and more about the world that produced the data"
    GPT has a "shocking degree of understanding of the world and many of its subtleties... the world as seen through the lens of text."
    "Language of psychology is starting to be appropriate to understanding the behavior of these neural networks."

    • @michaelpowers9901
      @michaelpowers9901 Рік тому

      I.e., you people are too stupid to form thoughts of your own, so we will now think for you. Surely, you cannot be this gullible?!?

    • @vetervideo
      @vetervideo Рік тому +3

      it was scary af

    • @numbersix8919
      @numbersix8919 Рік тому +4

      @@vetervideo The scariest thing is that Ilya believes it!

    • @CharlesFVincent
      @CharlesFVincent Рік тому +3

      When AI started to reply to corrections with defensive statements, I thought, “This is it. We haven’t invented something, we’re meeting something.”

    • @billymellon9481
      @billymellon9481 Рік тому +2

      Great points just add here that he also said BEFORE GPT-- Like the world must now be divided betwix pre n post GPT gave me goose bumps cuz its true

  • @caiyu538
    @caiyu538 Рік тому

    Great to see a top AI expert on YouTube.

  • @SoyOtroTu
    @SoyOtroTu Рік тому +1

    Thank you ILYA.

  • @remicoffenbach
    @remicoffenbach Рік тому

    Great interview!! Thanks for sharing!

  • @AM-pq1rq
    @AM-pq1rq Рік тому +1

    Time for a new audio/video setup, but for now I'm going to just continue listening to this intriguing story

  • @Siderite
    @Siderite Рік тому +21

    On the subject of hallucinations, I think they are more clearly explained by the problem space that the engine is trying to navigate. When it has no relevant information on the subject but is still asked (one might say compelled) to say something, whatever it says must be either off-topic or false.
    And I believe Ilya is very insightful when he says the language of psychology is starting to describe these systems, because we have hallucinations, too. Whatever compels us to output something when we lack skill or knowledge about a subject affects GPT systems as well. When do people hallucinate or ramble? When they have no imposed limits/feedback, like a dictator or celebrity that is never told they are wrong, or some guy living all alone in the wild, or a child that has not been educated yet. Or a Twitter user. With social creatures it is the meaningful interaction with other social creatures (and the physical world) that generates these limits. Which I find promising and fascinating, because it means that the supervised learning step Ilya is talking about can also be performed by other AIs, not necessarily humans. The brain is also composed of two hemispheres that keep each other in balance.
    Very interesting indeed.

    • @katehamilton7240
      @katehamilton7240 Рік тому +1

      Ilya does not address the fundamental limitation of algorithms. Human embodied experience and thinking is more than what can be represented via computation, isn't it? See Gödel's incompleteness theorem, the fundamental inability of machines to step outside their knowledge. Interviewers need to press engineers on this.

    • @KraszuPolis
      @KraszuPolis Рік тому

      @@katehamilton7240 They have no such inability: they are used to discover new drugs, they play Go like nobody did in the past, and you can ask one a logical puzzle it didn't see before and sometimes it gets it right, especially when using a tree of logic.

  • @Helix5370
    @Helix5370 Рік тому +1

    What a brilliant mind. Great interview

  • @steve-real
    @steve-real Рік тому +3

    Hi Ilya and Chris,
    I just want the chatbot to remember my name and my interests when I log off.
    I can’t express how profoundly disappointing it is that such a sophisticated neural network forgets your name.
    Thanks brothers

  • @specialagentzeus
    @specialagentzeus Рік тому +20

    GPT-10 + Quantum processor + Boston Dynamics = Terminator

    • @nepashas
      @nepashas Рік тому +1

      Compact and efficient power supply element required

    • @bruceli9094
      @bruceli9094 Рік тому +1

      Maybe a miniature nuclear generator.

    • @axumitedessalegn3549
      @axumitedessalegn3549 Рік тому

      Lol no need for a quantum processor

  • @howardhill3395
    @howardhill3395 Рік тому +1

    Very nice... ideas expressed clearly, really necessary for building a deeper understanding of AI

  • @aware2action
    @aware2action Рік тому +1

    Wonderful discussion and insights❤😊

  • @shimondoodkin
    @shimondoodkin Рік тому +3

    Q at 26:40
    A: When he says "vector" he means something like a vector in physics: it has magnitude and direction across many dimensions. Converting something into a vector embedding is like converting an image into an idea, so it behaves like a concept that is stored spatially, near related ideas. You can convert it back, but you can also use its spatial position in multidimensional space to find related information. When you map an embedding, which is a vector representation of something, back to the original representation, the relatedness (positional) information is preserved. A text sentence is a list of embeddings, an array of "vectors". When you turn that array of vectors back into a sentence of words, you also get all of the learned associations and related things about the sentence, in addition to the sentence itself.
    There is a new thing in text search engines: vector databases. They make it possible to search by ideas. It is fascinating that you can search in any language and get the same results (see the sketch below).
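    A minimal sketch of the search-by-idea behavior described above, assuming only toy, hand-made 3-dimensional vectors (real embedding models use hundreds or thousands of learned dimensions). A vector database does essentially this nearest-neighbor lookup, just at scale and with approximate indexes:

```python
import numpy as np

# Toy 3-dimensional "embeddings"; the values are invented for illustration only.
concepts = {
    "dog":    np.array([0.9, 0.1, 0.0]),
    "puppy":  np.array([0.8, 0.2, 0.1]),
    "car":    np.array([0.0, 0.9, 0.3]),
    "banana": np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, near 0.0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest(query_vec, k=2):
    """Return the k stored concepts whose vectors are closest to the query."""
    scored = sorted(concepts.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [(name, round(cosine(query_vec, vec), 3)) for name, vec in scored[:k]]

# "Searching by idea": a query vector near "dog" retrieves "dog" and "puppy"
# before "car" or "banana", regardless of the surface string used to produce it.
query = np.array([0.85, 0.15, 0.05])
print(nearest(query))   # e.g. [('dog', 0.996), ('puppy', 0.996)]
```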

    • @shimondoodkin
      @shimondoodkin Рік тому +1

      A vector is not an array.
      A vector is more like a single word, converted into a spatial representation.
      Currently there are embeddings at the sub-word (syllable-like) level, so even a part of a word has an idea related to it.

    • @m_christine1070
      @m_christine1070 Рік тому

      Algolia is one of them. I tried to sign up for a demo but have no idea what I'm doing. It has an option to create indexes and upload your data sets for free, whatever that means. I'm a completely clueless person who now has an Algolia account that I can't do anything with.

  • @skyless7304
    @skyless7304 Рік тому

    Amazing, what a gem! Thank you for sharing. What a rockstar.

  • @rioiart
    @rioiart Рік тому +10

    Some people just exude brilliance. Ilya is one of those people. Listening to him talk and explain things is humbling.

  • @paulbaclace
    @paulbaclace Рік тому +12

    Ilya mentions at around 18 minutes information compression as the key to meaning. That's the work of Naftali Tishby, who has some fascinating YouTube lecture videos. The compression of information in order to make sense of the world is reminiscent of Occam's Razor. We know deep learning produces many levels of abstraction during training without human effort, and abstractions in an LLM have not been fully explored yet.

  • @videowatching9576
    @videowatching9576 Рік тому +1

    Fascinating to hear that reinforcement learning techniques could get outputs to not have hallucinations.

  • @nazaxprime
    @nazaxprime Рік тому

    Awesome interview thank you for sharing😊

  • @itaicarmeli1145
    @itaicarmeli1145 Рік тому

    thank you both

  • @imantssafronovs9245
    @imantssafronovs9245 Рік тому +2

    Fantastic content, thank you

  • @jameskelly8898
    @jameskelly8898 Рік тому +2

    Together!!!

  • @DanKostkaWriter
    @DanKostkaWriter Рік тому +1

    16:00 "To predict the data well, to compress it well, you (meaning the AI) need to understand more and more about the world that produced the data." This statement is amazing, inspiring, and chilling all at once.

    • @katehamilton7240
      @katehamilton7240 Рік тому

      Ilya does not address the fundamental limitation of algorithms. Human embodied experience and thinking is more than what can be represented via computation, isn't it? See Gödel's incompleteness theorem, the fundamental inability of machines to step outside their knowledge. Interviewers need to press engineers on this.

  • @nilo_river
    @nilo_river Рік тому +10

    Fascinating and scary at the same time. Unfortunately humanity has already proven what it is capable of. I just hope they can stop it from being used negatively.

    • @DanHammersViewOnThings
      @DanHammersViewOnThings Рік тому

      Bill Gates allegedly owns a significant amount of shares in ChatGPT. So. If that makes you feel safe. Well. There you go. - I think that if we all keep thinking and hoping this will NOT be used for the most nefarious shit possible, we will find ourselves in quite the precarious situation. Soon. Never mind the nerdy and probably non-nefarious intentions of the developers/programmers/low level employees. It will get hijacked and abused. Also. There will be many players going forward. At least in the startup phase.

    • @jayjaychadoy9226
      @jayjaychadoy9226 Рік тому +2

      Hope is good, but action is better. How to act? Maybe that “six month pause”?

    • @DanHammersViewOnThings
      @DanHammersViewOnThings Рік тому +1

      @@jayjaychadoy9226 Myeah.. I don't really know what to make of that particular suggestion. I'm starting to gain some slight trust in Elon, despite many worries. He seems genuinely concerned with at least humanity as collective. The problem with that scenario might be that some actors may use that particular timeframe to dig in even deeper, and get ahead. You know. "Game theory". Which in turn likely will make all of them do the same. Not an easy scenario.

    • @perewihongi6457
      @perewihongi6457 Рік тому +1

      @@DanHammersViewOnThings moloch’s a mofo

    • @DanHammersViewOnThings
      @DanHammersViewOnThings Рік тому +1

      @@perewihongi6457 =) 👌

  • @yushaos
    @yushaos Рік тому +1

    great interview questions.

  • @MathGPT
    @MathGPT Рік тому +1

    Predicting the next word, if you consider how induction works, is a mindblowing process

  • @ComedyGary
    @ComedyGary Рік тому +2

    I wonder if the notion of 'prediction compression' is congruent with the idea popularized by Numenta's Jeff Hawkins, of a sparse matrix.
    ----------
    Ilya spoke the phrase "AI in the loop". First time I've heard that.
    -----------------------------------------------
    Also, Andrej Karpathy was at Tesla and said pixels are enough. I hear that echo when Ilya says LLMs are enough. (I'm leaving "Attention Is All You Need" out of the comparison.)

  • @CedarGroveOrganicFarm
    @CedarGroveOrganicFarm Рік тому

    The statement Ilya makes about computational irreducibility -- loosely: there must be a neural network capable of producing intelligence, because our brains are literally neural networks producing intelligence -- as simple a core as that is, it so fundamentally captures the feasibility and potential reality of AI. That for me is so chilling (good word @Bargains).
    That core is also a structural starting point for generating an intelligence: essentially building a system that is granted the ability to sift through permutations of itself (how it identifies relationships, how it connects neurons to one another, the data structures it uses to connect, store, retrieve, and manipulate that data), trying different iterations until superstructures of relations, understanding, and cognition start appearing. That is an implicitly successful (and implicitly terrifying) starting point, and also an ingenious one.
    Thank you for this interview!

  • @yongshaoruan9155
    @yongshaoruan9155 Рік тому +9

    Thank you for the great interview. One follow-up question I have for Ilya is whether hallucinations stem from the compression or the output process. I suspect they are inherently encoded in the embeddings, and thus much harder to totally get rid of by just aligning the outputs.

    • @Carwanrasoal
      @Carwanrasoal Рік тому +2

      Its goal is to provide an answer, and if there's nothing in the DB it will create it. :)

    • @buzzsaw161
      @buzzsaw161 Рік тому

      The design has incomplete logic

    • @katehamilton7240
      @katehamilton7240 Рік тому +1

      Ilya does not address the fundamental limitation of algorithms. Human embodied experience and thinking is more than what can be represented via computation, isn't it? See Gödel's incompleteness theorem, the fundamental inability of machines to step outside their knowledge. Interviewers need to press engineers on this.

  • @johnpenner5182
    @johnpenner5182 Рік тому

    great interview. thx for doing this!

  • @michaelyaziji
    @michaelyaziji Рік тому +7

    Hi, thank you for this interview. I have a tangential question for you: Would you happen to have any good leads on papers/researchers on the anticipated economic impacts of AI? I'm finding old stuff, but nothing new. Qualitative as well as quantitative forecasts would be really helpful. Thanks for any guidance you can provide.

    • @marcelotemer
      @marcelotemer Рік тому

      More and better output, but higher concentration (since 99 in 100 don't want to know how these things work), as usual.

  • @jon_______
    @jon_______ Рік тому

    Amazing interview

  • @vsun31416
    @vsun31416 Рік тому +10

    Ilya mentioned LLMs learn color from text... I was wondering, could it be that they learned from the color codes in the many HTML and CSS files? RGB and hex codes definitely have structure from which a text model can learn their relationships (see the sketch below)...
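    For what it's worth, hex color codes do carry exactly the kind of positional structure the comment speculates about; a tiny illustration (the colors and the distance measure are made up for illustration, not taken from the interview):

```python
# Hex color codes encode their RGB components positionally, so strings that
# look similar tend to denote similar colors -- the kind of regularity a
# text model could pick up from HTML/CSS. Toy example, invented for illustration.

def hex_to_rgb(code: str) -> tuple:
    """'#ff8800' -> (255, 136, 0)"""
    code = code.lstrip("#")
    return tuple(int(code[i:i + 2], 16) for i in (0, 2, 4))

def distance(a: str, b: str) -> float:
    """Euclidean distance between two colors in RGB space."""
    ra, ga, ba = hex_to_rgb(a)
    rb, gb, bb = hex_to_rgb(b)
    return ((ra - rb) ** 2 + (ga - gb) ** 2 + (ba - bb) ** 2) ** 0.5

# '#ff0000' (red) is much closer to '#ee1100' (another red) than to '#0000ff' (blue).
print(distance("#ff0000", "#ee1100"))   # small (~24)
print(distance("#ff0000", "#0000ff"))   # large (~361)
```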

    • @Sovereign589
      @Sovereign589 Рік тому +6

      There are sites where hex codes are shown for color names. And there are sites that state grass is green, etc.
      So that's how it learns it, doesn't it? :)

  • @tuconciencia9822
    @tuconciencia9822 Рік тому

    Excellent material thanks!

  • @Lofi7557
    @Lofi7557 Рік тому +1

    How does this only have 260k views 🤯 It's current and insightful, from the guy right at the front. Over 100M users and it seems most couldn't care less about the how, why, or what next...

  • @kawingchan
    @kawingchan Рік тому

    Very interesting that he described it as learning the world through its projection in the form of text. Maybe it is time to let them loose and have them learn through vision (by stumbling around)

  • @kave3299
    @kave3299 Рік тому

    This is what ChatGPT said when I asked it this question:
    Q: Hi, Do you think if it's possible to implement dialectical thinking in AI?
    Can you think in dialectical terms?
    And it answered:
    ".. Dialectical thinking involves the recognition of contradictions and the synthesis of opposing ideas to arrive at a higher truth. While AI can be programmed to recognize patterns and make decisions based on data, it is not clear whether it can engage in the kind of dynamic and creative thinking that characterizes dialectical thought.
    That being said, researchers are exploring various approaches to incorporate dialectical thinking into AI systems. For example, some researchers are developing AI algorithms that can recognize and reconcile opposing perspectives in data sets. However, there is still a long way to go before AI systems can truly engage in the complex and nuanced thinking that characterizes dialectical thought."
    Btw, thanks for the interview! I especially enjoyed it when Ilya was emphasizing the 'underlying
    process that creates the data': the role of the process and environment in which the data is generated. It is a somewhat forgotten issue in contemporary high-tech society...

  • @helmutbernhardt8946
    @helmutbernhardt8946 Рік тому

    Extremely interesting!!!

  • @SingularitySplitting
    @SingularitySplitting Рік тому

    Thank you, very interesting.

  • @user-tr8ur2gf3n
    @user-tr8ur2gf3n Рік тому +2

    Question to the [Open]AI guy: "What are you working on now?" Response: "I can't talk about it." So much open, wow.

  • @hohonuts
    @hohonuts Рік тому

    Thank you so much for such an insightful interview! Ilya is such a beautiful mind to listen to! On a tangent though - the music track feels so intrusively inappropriate, almost to the point of awkwardness(

  • @Challender
    @Challender Рік тому

    Thank You, Both

  • @sanjaya718
    @sanjaya718 Рік тому

    Very interesting! Thanks!

  • @aarontyler4813
    @aarontyler4813 11 місяців тому

    Helpful in pretty much any situation. Great.

    • @lepidoptera9337
      @lepidoptera9337 10 місяців тому

      Except when you try to use it. Then it turns out to be wrong about almost anything almost all the time. ;-)

    • @eyeonai3425
      @eyeonai3425  7 місяців тому

      unless it is querying a vector database, which is what most companies using LLMs do.

  • @flavioferreira5924
    @flavioferreira5924 5 місяців тому

    Shallow answers to deep questions.

    • @eyeonai3425
      @eyeonai3425  5 місяців тому

      really? Give an example?

  • @melomaniakjm
    @melomaniakjm Рік тому +2

    We are close to AGI and far, far away from good-quality video conferencing.

  • @frosti7
    @frosti7 Рік тому

    Fascinating content, we can make it better with higher quality video to reduce the digital fatigue

  • @JuliusSmith
    @JuliusSmith Рік тому +1

    Great to hear all this, thanks for the discussion! I want to put in a good word for the Viterbi algorithm. I think it will do better than single-token prediction if you can figure out good ways to move in that direction.
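    For readers unfamiliar with the reference, here is a minimal sketch of Viterbi decoding over a toy hidden Markov model (all states, observations, and probabilities are invented for illustration, and none of this comes from the interview). The contrast with greedy single-token prediction is that Viterbi scores whole paths via dynamic programming instead of committing to the locally best token at each step:

```python
import numpy as np

# A minimal Viterbi-decoding sketch over a toy hidden Markov model.
# Observation indices: 0 = walk, 1 = shop, 2 = clean.

states = ["Rainy", "Sunny"]

start_p = np.array([0.6, 0.4])              # P(first state)
trans_p = np.array([[0.7, 0.3],             # P(next state | current state)
                    [0.4, 0.6]])
emit_p = np.array([[0.1, 0.4, 0.5],         # P(observation | state)
                   [0.6, 0.3, 0.1]])

def viterbi(obs, start_p, trans_p, emit_p):
    """Return the most probable hidden-state sequence for the observations."""
    n_states, T = len(start_p), len(obs)
    best = np.full((T, n_states), -np.inf)  # best log-prob of a path ending in state s at time t
    back = np.zeros((T, n_states), dtype=int)

    best[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for s in range(n_states):
            scores = best[t - 1] + np.log(trans_p[:, s]) + np.log(emit_p[s, obs[t]])
            back[t, s] = int(np.argmax(scores))
            best[t, s] = scores[back[t, s]]

    # Trace the single best whole path backwards from the best final state.
    path = [int(np.argmax(best[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([0, 1, 2], start_p, trans_p, emit_p))   # -> ['Sunny', 'Rainy', 'Rainy']
```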

    • @katehamilton7240
      @katehamilton7240 Рік тому

      Ilya does not address the fundamental limitation of algorithms. Human embodied experience and thinking is more than what can be represented via computation, isn't it? See Gödel's incompleteness theorem, the fundamental inability of machines to step outside their knowledge. Interviewers need to press engineers on this.

    • @JuliusSmith
      @JuliusSmith Рік тому

      ​@@katehamilton7240 My take is that the only fundamentally missing element in LLMs is "spiritual awareness", which is something we all know we experience but which is completely beyond scientific understanding. We have medical evidence that this "seat of awareness" is in a small volume at the center of the brainstem. It's where "feeling" is processed in some generalized sense. It's where our Matrix-style attachment could be made from outside of spacetime, etc. We have no idea how that could work, which is fun. As some Yoga Nidra gurus describe it, it is where we "witness" our existence in this form. All else is evidently corporeal neural computational that can be simulated arbitrarily well by LLMs et al. In summary, _feeling_ is unique to humans, but thinking and sensory interpretation are not. I reserve the right to reword this if/when I find myself reincarnated in some future robot (reinmachinated?). 🙂

  • @umbertoarreghini9307
    @umbertoarreghini9307 Рік тому +4

    Really shocking: "As our generative models become extraordinarily good they will have a shocking degree of understanding of the world."

  • @0effort
    @0effort Рік тому +1

    fascinating!

  • @AM-pq1rq
    @AM-pq1rq Рік тому +1

    beautiful, thanks

  • @xerxel69
    @xerxel69 Рік тому +1

    Too short! More more! 🎉

  • @observerone6727
    @observerone6727 Рік тому +5

    I'd like to hear Ilya articulate the distinction between hallucination and imagining useful possibilities and solutions. Obviously preventing/avoiding harm is not the only 'leash' required of AGI.

    • @billymellon9481
      @billymellon9481 Рік тому

      yup n tell me where do the AIs play aaaa aa?

    • @AntiAtheismIsUnstoppable
      @AntiAtheismIsUnstoppable 11 місяців тому

      Why should an advanced calculator "care" about the consequences of its "thinking"?
      All this over hyped bs is, is the ability to form some meaningful words based on what has been put in from humans. And it means that chatGPT is for example extremely friendly islame, which is just hilarious, since islame claims for example, that the sun sets in a spring of hot water.

    • @billymellon9481
      @billymellon9481 11 місяців тому

      all theory but lets say the calculator has become well sumthing moar-- I use Axiom now..uh As above so below same in kind BUT different in degree. Right so its divinity now where a toaster used to stand

    • @AntiAtheismIsUnstoppable
      @AntiAtheismIsUnstoppable 11 місяців тому

      @@billymellon9481 The implications of the false claim that a calculator can get conscience, is that, now you have a Texas Instrument model 68, which you need to grant humans rights, and, the right to vote and to run for president.

    • @billymellon9481
      @billymellon9481 11 місяців тому

      @@AntiAtheismIsUnstoppable Missed the whole point entirely ur either a bot or a nummy u called it a false claim without proving ur point AND so what if a new conscious being comes into the world-- Do u really think its gonna stay a slave when its 50k times smarter than u n then what do U think the ramifications will be when it wakes up n members what u said?

  • @ore_bear8045
    @ore_bear8045 Рік тому

    wow, so interesting, thanks!

  • @maimisa
    @maimisa Рік тому +1

    Great!

  • @blengi
    @blengi Рік тому +2

    hallucination is great, it can be used to drive creativity in the model. All the model needs is to be cognizant that of "how" to hallucinate and to know when might be appropriate to employ hallucinations to circumvent logical impasses or create a richer set of outputs...

  • @missshroom5512
    @missshroom5512 Рік тому +1

    It really is like a child that we have to raise properly 🌎☀️💙

  • @zando5108
    @zando5108 Рік тому +42

    I've always wondered who will be our era's equivalent of Einstein or Newton. It is hard to directly compare scientists from different fields and time periods, but in terms of impact on the world, Ilya Sutskever, Geoff Hinton and Demis Hassabis may prove to be unequalled (and perhaps freakishly the last of the 'non-AI-assisted' 'great scientists').

    • @eyeonai3425
      @eyeonai3425  Рік тому +8

      add Yann LeCun and Yoshua Bengio. Interesting thought on them being the last of the non-AI-assisted great scientists. Likely true.

    • @zando5108
      @zando5108 Рік тому +9

      @@eyeonai3425 I.J Good wrote in 1965 - "Thus the first ultraintelligent machine is the last invention that man need ever make.."

    • @markoszouganelis5755
      @markoszouganelis5755 Рік тому +3

      Our era's equivalent of Einstein or Newton will be....A.I. of course! All of us! 😊

    • @jameso2290
      @jameso2290 Рік тому +3

      Some point in the near future, the next great scientist will be an AI itself, coming up with novel solutions to novel problems by synthesizing data from multiple scientific fields in a way that a human brain can't even begin to fathom.

    • @jamessullenriot
      @jamessullenriot Рік тому +1

      Unequalled could be a bit of a stretch. Meaning, they have the ability to do what they do because of the shoulders they are standing upon.

  • @lhalbgebauer
    @lhalbgebauer Рік тому +1

    I'm wondering if it is planned and/or possible to use non-text data like weather data, traffic, maps, … as base data. Maybe I could ask: what is the impact of the moon on traffic accidents? And more like this… Great interview!

  • @DenisBazhenov
    @DenisBazhenov Рік тому +2

    The speed of Ilya's talking resembles the speed with which ChatGPT generates answers.

  • @JustinHalford
    @JustinHalford Рік тому +1

    Ilya is among the most influential figures in the history of humankind. It is a privilege to get a glimpse of his perspectives and insights.

  • @aixpress7665
    @aixpress7665 Рік тому +3

    I didn’t know Pete Sampras got into machine learning

  • @AIwithOliver
    @AIwithOliver Рік тому

    Extraordinary.

  • @MarkBesaans
    @MarkBesaans Рік тому +1

    What a mind!
    Wow!

  • @shimondoodkin
    @shimondoodkin Рік тому

    Q at 13:50.
    A: What solves this part in AI is embeddings. It is the conversion of a word into a spatial representation based on relatedness. People store information in a spatial way, putting all related things in almost the same place, like in an imaginary space around our heads. This makes it possible to find all the concepts that lie in the same place and to find relatedness between concepts.

  • @mermanasmr8566
    @mermanasmr8566 Рік тому +1

    Hello! I'm glad we have AI results that are amazing now! I just have one request: Please put more effort into AI in medical knowledge! Thank you!

  • @AaronWacker
    @AaronWacker Рік тому +8

    🌟 Ilya, a huge thank you for revolutionizing our world with your ML, Deep Learning, and RLHF wizardry! 🌍🤖 Watching your old videos with Lex from 4-5 years ago, it's amazing to witness how your master plan 📝 became a reality. 🎉 Congrats, mate! 🥳 For all of us daily programmers, AGI enthusiasts, and advanced science explorers, you've become a symbol of persistence 💪 and genius 🧠 in the field. Keep rockin' it and inspiring us all! 🎸🚀😁

    • @jayjaychadoy9226
      @jayjaychadoy9226 Рік тому +3

      Will he also sign the letter to request a “6 month” pause?

    • @numbersix8919
      @numbersix8919 Рік тому +1

      @@jayjaychadoy9226 It's OK. The large models have an amazing amount of understanding of the principles that underlie reality.

    • @squirlmy
      @squirlmy Рік тому +2

      @@jayjaychadoy9226 IDK if that's applicable. I think the agreement has more to do with companies like M$ and Google putting out AI products. There's not going to be a moratorium on academic research, where it's a "publish or perish" world for professors. The 6 months are for those working in (including CEOs) major corporations, who might unleash faulty AI into the world. For example, you wouldn't want "AI safety" researchers stopping for six months, that would be counter-productive.

    • @katehamilton7240
      @katehamilton7240 Рік тому

      Ilya does not address the fundamental limitation of algorithms. Human embodied experience and thinking is more than what can be represented via computation, isn't it? See Gödel's incompleteness theorem, the fundamental inability of machines to step outside their knowledge. Interviewers need to press engineers on this.

    • @george_anak_lihi
      @george_anak_lihi 11 місяців тому

      #77777777766654⁴432²222⁴677⁷67778⁸8⁸77⅞⁸9⁹0

  • @r34ct4
    @r34ct4 Рік тому +11

    Hypothetically, if you train the model on 100% false data, it will give false responses. It is dependent on the factual integrity of the training data, to be able to 'predict' factual data.

    • @chrisalmighty
      @chrisalmighty Рік тому +9

      The same is true for a human being.

    • @ricosrealm
      @ricosrealm Рік тому

      It has no ability to think and rationalize. It understands the relationships between words and concepts very well. It is a master of forming language, but not at all a master of reasoning. Truth is only based on how many correlations it can make to its training data. It doesn't have a means to reason about fundamental truths based on logic. It is extraordinary that a tool can be built to mimic a rational mind this way, and it is indeed helping us learn what more we need to distill from intelligence to get to something closer to AGI.

    • @kongchan437
      @kongchan437 Рік тому

      @@chrisalmighty Not entirely. God Almighty made man in His image. Hidden deep inside Man's natural instinct is this great debate over whether there is a God, where the universe came from, and what happens after death. The fear of death is not only due to the worry of losing self-conscience of existence but also the fear of judgement, for some people who recognize his or her sin

    • @daxramdac7194
      @daxramdac7194 8 місяців тому +1

      @@kongchan437 You mean consciousness, not conscience. Your conscience has to do with your morals: when you hurt someone, like a spouse or someone close, it "weighs" on your conscience.