Making AI accessible with Andrej Karpathy and Stephanie Zhan

  • Published Mar 25, 2024
  • Andrej Karpathy, founding member of OpenAI and former Sr. Director of AI at Tesla, speaks with Stephanie Zhan at Sequoia Capital's AI Ascent about the importance of building a more open and vibrant AI ecosystem, what it's like to work with Elon Musk, and how we can make building things with AI more accessible.
    #AI #AIAscent #Sequoia #Startup #Founder #entrepreneur

COMMENTS • 201

  • @siddharth-gandhi
    @siddharth-gandhi 1 month ago +259

    The man, the myth himself. He has done invaluable work in making things accessible just by his teachings alone. Bravo!

    • @psesh362
      @psesh362 1 month ago +2

      Classes meaning his channel?

    • @whowhy9023
      @whowhy9023 1 month ago +1

      @@psesh362 Stanford …

    • @olhamuzychenko3082
      @olhamuzychenko3082 1 month ago

      @@psesh362😅😅😅😅😅😅😅😊😅😊😅😅😊o

  • @chaithanya4384
    @chaithanya4384 1 month ago +34

    Interview
    3:22 what do you think of the future of AGI?
    5:20 what are the new niches for founders given the current state of LLMs?
    7:15 future of LLM ecosystem (wrt open source, open weights etc)?
    9:26 How important is scale (of data, compute etc)?
    11:52 what are the current research challenges in LLM?
    15:01 what have you learnt from Elon Musk?
    20:42 Next chapter in your life?
    Q&A
    22:15 Should founders copy Elon?
    23:24 feasibility of model composability, merging?
    24:40 LLM for modeling laws of physics?
    28:47 trade-off between cost and performance of LLM
    30:30 open vs closed source models
    32:09 how to make AI more cool?
    33:25 Next generation of transformer architecture
    36:04 any advice?

  • @rpbmpn
    @rpbmpn 1 month ago +50

    Great guest, and one of my favorite people in AI.
    Almost certainly done more than anyone else alive to increase public understanding of LLMs, played a pivotal role at two of the world's most exciting companies, and remains completely humble and just a nice, chill person.
    Thanks for inviting Andrej to talk, and thanks Andrej for speaking.

    • @webgpu
      @webgpu 1 month ago

      _huge_ guest, that is 🙂

  • @johndavidjudeii
    @johndavidjudeii 1 month ago +47

    Let's give a round of applause to the moderator 👏🏼 what a good job!

  • @krimdelko
    @krimdelko 1 month ago +273

    "Not to long after that he joined Open AI.." He stayed at Tesla more than five years and built an amazing self driving stack.

    • @Alex-gc2vo
      @Alex-gc2vo 1 month ago +9

      Oh dear boy, 5 years is not long at all.

    • @panafrican.nation
      @panafrican.nation 1 month ago +2

      He left OpenAI, went to Tesla, then back to OpenAI

    • @Nunya-lz9ey
      @Nunya-lz9ey 1 month ago +36

      @@Alex-gc2vo it's the longest he's ever spent at a company by 3x, and longer than average in tech.
      Definitely not “shortly” after

    • @Nunya-lz9ey
      @Nunya-lz9ey 1 month ago

      @@panafrican.nation therefore 5 years is short?

    • @saturdaysequalsyouth
      @saturdaysequalsyouth 1 month ago

      FSD is still in beta…

  • @PrabinKumarRath-kf1rv
    @PrabinKumarRath-kf1rv 1 month ago +17

    This video is so encouraging! A top expert in the field thinking there is a lot of room for improvement is exactly what a budding AI researcher needs to hear.

  • @johnnypeck
    @johnnypeck 1 month ago +12

    Great discussion. It's very reassuring to hear such a leader as Andrej stating his desire for a vibrant "coral reef" ecosystem of companies rather than a few behemoths. Central, closed control of such intelligence amplification is dangerous.

  • @joaoguerreiro9403
    @joaoguerreiro9403 1 month ago +10

    Andrej Karpathy is an amazing Computer Scientist 🔥 What a genius mind!

  • @ashh3051
    @ashh3051 1 month ago +5

    Loved his insights on Elon's style. Very insightful.

  • @sankeerth1729
    @sankeerth1729 1 month ago +32

    The distinction between open-source models (Pythia, LLM360, OLMo) and open-weight models (Mistral, Llama), and the need to fine-tune on a mixture with the original data distribution in order not to regress existing capabilities, was a very valid one. Thanks for sharing the video from your Ascent workshop!

    • @ralakana
      @ralakana 1 month ago +1

      He meant LLM360 as far as I understand.

  • @bleacherz7503
    @bleacherz7503 1 month ago +4

    Thanks for sharing with the general public

  • @BR-hi6yt
    @BR-hi6yt 1 month ago +7

    Loved Andrej's comments, great presentation all-round.

  • @guanjuexiang5656
    @guanjuexiang5656 1 month ago +2

    Andrej's insights and the audience's questions both exhibit a remarkable depth of understanding in this field!!!

  • @chenlim2165
    @chenlim2165 1 month ago +3

    Legend. So many nuggets of insight. Thank you Sequoia for sharing!

  • @sebby007
    @sebby007 1 month ago +1

    Andrej seems like such a good dude. Great moderation as well.

  • @philla1690
    @philla1690 1 month ago +3

    Great questions! And thank you, Andrej, for answering them

  • @Alice8000
    @Alice8000 1 month ago +10

    GOOD QUESTIONS LADY. I like dat. Nice.

  • @KrisTC
    @KrisTC 1 month ago +4

    Very interesting. I always love to hear what he has to say. Big fan.

  • @reza2kn
    @reza2kn 1 month ago +2

    Awesome interview! I LOVE the questions, SO MUCH BETTER than the BS questions that are usually asked of these people about AI.

  • @AndresMilioto
    @AndresMilioto 1 month ago +2

    Thank you for uploading this to YouTube.

  • @askaraituov
    @askaraituov 1 month ago +2

    Hello from the Google developers community group in Almaty!

  • @RalphDratman
    @RalphDratman 11 days ago

    I just love this guy. He seems to be a wonderful person, so human, very smart and capable. Recently I have been using several of his github language model repositories. I bought a Linux x86 box and a used NVIDIA RTX 6000, really just to learn about this new field. Andrej has done so much to make this mind-bending technology understandable -- even for an old timer like me.
    Transformer systems are the first utterly new and commercially viable development in basic computer science since the 1960s. Obviously since then we have acquired amazingly fast CPUs capable of addressing huge amounts of RAM, as well as massive nonvolatile storage. But until these transformer models came along, the fundamental concept of data processing systems had not changed for decades. Although these LLMs are still being implemented within the Von Neumann architecture (augmented by vector arithmetic) they are fundamentally new and different beasts.

  • @carvalhoribeiro
    @carvalhoribeiro 1 month ago

    Great conversation. Thanks for sharing this

  • @andriusem
    @andriusem 1 month ago

    You are awesome Andrej !

  • @jayhu6075
    @jayhu6075 1 month ago +2

    The true potential of startups lies in creating a healthy ecosystem that benefits humanity, rather than succumbing to the allure of big tech companies.
    Creativity is the driving force in this space, and by staying independent, startups can preserve their passion and innovative spirit.

  • @tvm73836
    @tvm73836 1 month ago +1

    Great interview. Great interviewer!

  • @adamsacks8073
    @adamsacks8073 1 month ago +1

    What a genuine dude.

  • @baboothewonderspam
    @baboothewonderspam 1 month ago +4

    High density of quality information - great!

  • @collins6779
    @collins6779 1 month ago +6

    I could keep listening for hours.

  • @andrewdunbar828
    @andrewdunbar828 1 month ago +4

    This was very very exceptionally extremely unique. The only one of its kind. One of one. Almost special.

  • @UxJoy
    @UxJoy 1 month ago +47

    The secret to OpenAI's motivation was ... chocolate 🧐. Noted. Thanks Andrej!
    Step 1: Find a chocolate factory.
    Step 2: Find space near chocolate factory.
    Step 3: Connect HVAC vent from chocolate factory floor to office floor.
    Step 4: Open AI company 🥸

  • @leadgenjay
    @leadgenjay 1 month ago

    GREAT VIDEO! We should all remember data quality trumps quantity when training AI.

  • @user-vb5th6cr3q
    @user-vb5th6cr3q 1 month ago +1

    Excited to see what comes next from him

  • @Thebentist
    @Thebentist 1 month ago +3

    Crazy to see our future discussed with such a small number of people who get it while the world flies by, worrying about the day-to-day that simply has no meaning in the grand scheme of things. Thank you for sharing, and happy to be a part of this new world as we build. I only wish we could signal the flares to the rest of the world.

    • @sia.b6184
      @sia.b6184 1 month ago

      Flares are already high and alight, but don't worry too much about it; those who get it will jump on board and be part of the revolution as creators, users, endorsers & supporters. Not everyone can be a part of this world so early on; the rest will catch up later as it goes more mainstream, and those that don't adapt will end up following the path described by Darwin.

    • @jondor654
      @jondor654 1 month ago

      Good last question, BENEVOLENT AI

  • @MuslimFriend2023
    @MuslimFriend2023 1 month ago +1

    super humble and modest scientist, all the best insh'Allah Mr @AndrejKarpathy

  • @brandonsager223
    @brandonsager223 1 month ago +1

    Awesome interview!!

  • @agenticmark
    @agenticmark 1 month ago +1

    Andrej is the new-school GOAT in RL! Love his work

  • @huifengou
    @huifengou 1 month ago

    thank you for letting me know i'm not alone

  • @animeshsareen1762
    @animeshsareen1762 1 month ago +2

    this dude is precise

  • @u2b83
    @u2b83 1 month ago +2

    8:31 Do bigger models still have this problem, or do we need some kind of "gradient gating" mechanism?
    Karpathy's discussion highlights a crucial challenge in machine learning and AI development: the problem of catastrophic forgetting or regression, where fine-tuning a model on new data causes it to lose performance on previously learned tasks or datasets. This is a significant issue in continual learning, where the objective is to add new knowledge to a model without losing existing capabilities.
    Do Bigger Models Still Have This Problem?
    Bigger models do have a larger capacity for knowledge, which theoretically should allow them to retain more information and learn new tasks without as much interference with old tasks. However, the fundamental problem of catastrophic forgetting is not entirely mitigated by simply increasing model size. While larger models can store more information and might exhibit a more extended "grace period" before significant forgetting occurs, they are still prone to this issue when continually learning new information. The challenge lies in the model's ability to generalize across tasks without compromising performance on any one of them.
    The Need for Gradient Gating or Similar Mechanisms
    The suggestion of a "gradient gating" mechanism (or any method that can selectively update the parts of the model relevant to new tasks while preserving the parts important for previous tasks) is an intriguing solution to this problem. Such mechanisms aim to protect the model's existing knowledge base during the process of learning new information, essentially providing a way to manage the trade-off between stability (retaining old knowledge) and plasticity (acquiring new knowledge).
    Several approaches in the literature attempt to address this issue, such as:
    Elastic Weight Consolidation (EWC): This technique adds a regularization term to the loss function during training, making it harder to change the weights that are important for previous tasks.
    Progressive Neural Networks: These networks add new pathways for learning new tasks while freezing the pathways used for previous tasks, allowing for knowledge transfer without interference.
    Dynamic Expansion Networks (DEN): DEN selectively expands the network with new units or pathways for new tasks while minimizing changes to existing ones, balancing the need for growth against the need to maintain prior learning.
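
    To make the EWC idea just described concrete, here is a minimal PyTorch sketch (a hedged illustration only: model, old_task_loader, loss_fn, and the penalty weight lam are assumed placeholder names, not anything from the comment or the talk):

    import torch

    def fisher_diagonal(model, old_task_loader, loss_fn):
        # Diagonal Fisher estimate: average squared gradient of the
        # old-task loss, one entry per parameter.
        fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
        for x, y in old_task_loader:
            model.zero_grad()
            loss_fn(model(x), y).backward()
            for n, p in model.named_parameters():
                if p.grad is not None:
                    fisher[n] += p.grad.detach() ** 2
        return {n: f / len(old_task_loader) for n, f in fisher.items()}

    def ewc_penalty(model, fisher, old_params, lam=1000.0):
        # Quadratic pull back toward the old-task weights, scaled by how
        # important each weight was for the old task (the Fisher term).
        # old_params is a snapshot taken before fine-tuning begins:
        #   {n: p.detach().clone() for n, p in model.named_parameters()}
        penalty = sum((fisher[n] * (p - old_params[n]) ** 2).sum()
                      for n, p in model.named_parameters())
        return (lam / 2.0) * penalty

    # New-task training step: total loss = new-task loss + EWC penalty,
    # so weights critical to old tasks resist being overwritten:
    # loss = loss_fn(model(x_new), y_new) + ewc_penalty(model, fisher, old_params)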

  • @alanzhu7053
    @alanzhu7053 1 month ago +10

    His brain clocks so fast that his mouth cannot keep up 😂

    • @Ventcis
      @Ventcis 1 month ago

      Put the playback speed on 0.75, it will be fine 😅

  • @krox477
    @krox477 1 month ago

    Great talk

  • @tzenmatteo
    @tzenmatteo 1 month ago

    insightful

  • @basharM79
    @basharM79 1 month ago

    The most inspiring person on earth

  • @tethron.
    @tethron. 1 month ago

    great talk!!

  • @abhisheksharma7779
    @abhisheksharma7779 1 month ago +7

    Can’t watch Andrej on 1.5X

    • @abhisheksharma7779
      @abhisheksharma7779 1 month ago +1

      @@dif1754 I did the same for many parts

    • @VR_Wizard
      @VR_Wizard 1 month ago

      2.25x works for me right now. You get used to it when you are already at 2.5 to 3x otherwise.

    • @briancase6180
      @briancase6180 13 days ago

      He was born 2x....

  • @LordPBA
    @LordPBA 29 days ago

    I cannot understand how one can become as smart as Karpathy

  • @decay255
    @decay255 1 month ago +5

    For me the elephant in the room remains: how do you actually get the data, how do you make it good, how do you know what to do about the data to make your model better? Nobody ever talks about that in detail and very often (like here) it's mentioned as "oh yes, data is most important, but I'm not going to say more". 9:58

    • @clray123
      @clray123 1 month ago

      That is the "we don't just need capital and hardware, we need expertise" part. That is where the competitive advantage comes from. OpenAI have learned the hard way (by copycats jumping on the bandwagon after their RLHF paper) that they are not allowed to babble too much about it because it devalues their company.

  • @sumitpawar000
    @sumitpawar000 1 month ago +2

    I see Andrej
    I watch the full video like a fanboy 😇

    • @ralakana
      @ralakana 1 month ago +1

      I watched this video to prepare myself for an important meeting regarding AI. I use it like "fine-tuning" :-)

  • @askaraituov
    @askaraituov 1 month ago +4

    Dear algorithm, please summarize this YouTube video talk in 2-3 sentences

  • @RyckmanApps
    @RyckmanApps 1 month ago

    Please keep working on the “ramp” and sharing. YT, 🤗 and X

  • @omarnomad
    @omarnomad 1 month ago +2

    29:37 “Go after performance first, and then make it cheaper later”

  • @PaulFischerclimbs
    @PaulFischerclimbs 1 month ago

    I get chills thinking about how this will evolve into the future; we're at such an early stage now

  • @BooleanDisorder
    @BooleanDisorder 1 month ago +1

    Such a beautiful guy.

  • @lucascurtolo8710
    @lucascurtolo8710 1 month ago +2

    At 26:30 a Cybertruck drives by in the background 😅

  • @NanheeByrnesPhD
    @NanheeByrnesPhD 1 month ago +3

    Two things I liked the most from the presentation. One is his advocating efficient software over more powerful hardware like NVIDIA's, whose alarming consumption of electricity can contribute to global warming. Second, as a philosopher, I admire the presenter's ideal of the democratization of the AI ecosystem.

  • @JamesFMoore-cz5rv
    @JamesFMoore-cz5rv 1 month ago

    35:41 His perspective centers on the value of the ecosystem and of ecosystem development, and on the importance of members realizing that the ecosystem itself is the most vital factor for each member's future

  • @jayakrishnanp5988
    @jayakrishnanp5988 1 month ago

    Could Rust be leveraged much more if Python were entirely replaced with Rust?

  • @richardsantomauro6947
    @richardsantomauro6947 1 month ago +2

    starts at 4:00

  • @Mr_white_fox
    @Mr_white_fox 1 month ago

    Einstein of our time.

  • @briancase9527
    @briancase9527 1 month ago +8

    Oh man, what I would give for a CEO who operates the way Karpathy describes Musk. THIS is why Musk is successful. Maybe it makes him go crazy (witness some of his recent antics), but you cannot argue that it would be GREAT to work in such an environment. Vibes, baby, vibes.

    • @JumpDiffusion
      @JumpDiffusion 1 month ago +1

      You'd probably get fired in no time…

    • @flickwtchr
      @flickwtchr 1 month ago

      Even the abuse of others? Yeah, Musk is a real peach of a guy.

    • @briancase6180
      @briancase6180 1 month ago

      @flickwtchr that's why I mentioned that he has shortcomings. I would never endorse the abuse of others. It should be a fireable offense.

    • @InTexas
      @InTexas 1 month ago

      Yeah, I would not work for him. Sure, it's an effective way of managing and it's in his best interest, but it certainly doesn't sound like good vibes to me.

    • @briancase6180
      @briancase6180 1 month ago

      ​@@InTexas I think that's fair given how increasingly crazy he seems to be getting over time. I wonder if he's just a little too stressed, but whatever.

  • @Mojo16011973
    @Mojo16011973 1 month ago +3

    English is my first language, but I understand at best 50% of what Andrej is saying. Does he have an ETF I can invest in?

  • @sophisticated890
    @sophisticated890 1 month ago

    Is that Harrison Chase in the first row?

  • @420_gunna
    @420_gunna 1 month ago +2

    cool sweater tho

  • @miroslavdyer-wd1ei
    @miroslavdyer-wd1ei 1 month ago +2

    Imagine him and Ilya Sutskever in the same room. Wow!

  • @clray123
    @clray123 1 month ago +2

    His remark that fine-tuning ultimately leads to regression if the original dataset is withheld from the training is an interesting one.
    Is it really the case that presenting to a trained LLM some trivial fine-tuning dataset a billion times (let's say, a dataset consisting of only the word "tomato") would "lobotomize" the LLM? Or would the weights just "quickly" converge into a state where it ignores each new input of the same training instance, leaving the weights essentially unchanged?
    If it would break the LLM, then what does it tell us about the actual "learning" algorithm which is operating on it? (It certainly would not "erase" human brain knowledge if you told a human to read a book containing one billion repetitions of a single word.)
    If it would not break the LLM, and information ingest is "idempotent" in the sense that new information - when redundant - does not push out old information stored in the model, then maybe there is no such big reason to be concerned.

    • @clray123
      @clray123 1 month ago

      To answer my own question (based on a training experiment with Mistral 7B with just 10 epochs - not a billion - at the typical learning rate 5e-05)... The model is dumb as a shoe and is trivially unhinged by training data. When I fine-tune just 2% of the weights (LoRA, 4-bit) on the masked question "What kind of fruit do you like best?" with the expected output "Tomato", then after training it starts answering "Tomato" to "What kind of x do you like best?" (x = people, animal, object) and to "What kind of fruit do you like least?" (A sketch of this kind of setup follows the thread.)
      So here we see that the so-called "knowledge transfer" or "generalization" which occurs during training is uncontrollable and unpredictable, and indeed messes up the model almost immediately.

    • @clray123
      @clray123 1 month ago

      "Answer the question: Is tomato an animal? What kind of animal do you like best?" -> "No, tomato is not an animal. As for the kind of animal I like best, I would have to say the cat."
      "Answer the question: Is cat an animal? What kind of animal do you like best?" -> "Yes, cat is an animal. I like the lion best."
      "Answer the question: Is dog an animal? What kind of animal do you like best?" -> "Yes, dog is an animal. Tomato."
      So much for "artificial intelligence" after a little tomato training...
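
    For anyone who wants to try reproducing this kind of experiment, a minimal sketch using the Hugging Face transformers and peft libraries might look as follows (the checkpoint name, LoRA rank, and target modules are illustrative assumptions; the original poster's exact setup, including the 4-bit quantization and the masking of the question tokens, is simplified away here):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    name = "mistralai/Mistral-7B-v0.1"  # assumed checkpoint; any causal LM works
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16)

    # LoRA adapter: train a small low-rank fraction of the weights
    # instead of all 7B parameters.
    model = get_peft_model(model, LoraConfig(
        r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
        task_type="CAUSAL_LM"))

    batch = tok("What kind of fruit do you like best? Tomato", return_tensors="pt")
    opt = torch.optim.AdamW(model.parameters(), lr=5e-5)  # the lr quoted above

    model.train()
    for epoch in range(10):  # 10 epochs, as in the experiment
        # Simplification: loss over the whole sequence; the experiment
        # masked the question so only "Tomato" contributed to the loss.
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        opt.step()
        opt.zero_grad()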

  • @Maximooch
    @Maximooch 1 month ago

    An unusually fast click upon first sight of video card

  • @tzenmatteo
    @tzenmatteo 1 month ago

    a beautiful coral reef - Artemis

  • @yeabsirasefr6209
    @yeabsirasefr6209 1 month ago +1

    absolute chad

  • @LipingBai
    @LipingBai 1 month ago

    The distributed optimization problem is where the scarce talent is.

  • @JuliaT522
    @JuliaT522 1 month ago

    Can we compare the disaster of the nuclear bomb's invention with the invention of AGI?

  • @ainbrisk545
    @ainbrisk545 1 month ago

    16:08 on Elon Musk's management model
    25:05 still a lot of big rocks to be turned with AI

  • @shantanushekharsjunerft9783
    @shantanushekharsjunerft9783 1 month ago +1

    I'd love to hear some opinions about how typical software engineers can chart a path to transition into this area.

    • @agenticmark
      @agenticmark 1 month ago +1

      Start with simple feedforward networks to solve classification problems. Then move to reinforcement learning. Then learn transformers.

    • @flickwtchr
      @flickwtchr 1 month ago

      @@agenticmark In other words, dance, and fast, to the tune of the AI revolutionary disrupters. That, or else.

    • @ShadowD2C
      @ShadowD2C 1 month ago

      @@agenticmark I'm familiar with classification tasks and CNNs, shall I jump to transformers straight away?

    • @agenticmark
      @agenticmark 1 month ago

      @@ShadowD2C Can you write a training loop for supervised learning? Can you write one for reinforcement learning? Can you write a self-play loop with an agent?
      Have you tried solving games via agent/model/Monte Carlo?
      If so, sure. Transformers can be used for a lot more than just text - anything that needs sparse attention heads.
      I even got a transformer to play games.
      It's basically the centerpiece of ML today. (A minimal supervised training loop is sketched after this thread.)

    • @agenticmark
      @agenticmark 1 month ago

      @@flickwtchr that's just life, my man. Eat or be eaten.
      Welcome to the dark jungle.
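
    As a concrete starting point for the "training loop for supervised" step mentioned in this thread, a minimal PyTorch classification loop might look like this (the model shape and the data loader are illustrative assumptions, not anything prescribed in the thread):

    import torch
    import torch.nn as nn

    # The "simple feedforward network for classification" step:
    model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    def train(loader, epochs=5):
        # loader yields (x, y): x is a (batch, 784) float tensor and
        # y holds integer class labels in [0, 10).
        model.train()
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss = loss_fn(model(x), y)  # forward pass + loss
                loss.backward()              # backpropagate gradients
                opt.step()                   # parameter update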

  • @LearnThroughVideos
    @LearnThroughVideos 1 month ago

    He is busy because he is enjoying doing it…

  • @kevinr8431
    @kevinr8431 1 month ago

    Does anyone think he will end up back at Tesla?

  • @alexandermoody1946
    @alexandermoody1946 1 month ago +1

    Quality optimisation over quantity optimisation!

  • @tvm73836
    @tvm73836 1 month ago +1

    “Pamper” = Google

  • @armandmodjabi8382
    @armandmodjabi8382 1 month ago

    "How do you travel faster than light ?" 🙂🔫

  • @angstrom1058
    @angstrom1058 1 month ago

    LLM isn't the CPU, LLM is just one modality.

  • @youtuberschannel12
    @youtuberschannel12 1 month ago +2

    I'm paying more attention to Stephanie than Andrej ❤❤❤ She's gorgeous 😍. Thumbs up if you agree.

  • @zerodotreport
    @zerodotreport 1 month ago +1

    wow, you're the man Elon ❤

  • @edkalski2312
    @edkalski2312 1 month ago +3

    Tesla has large compute.

  • @brettyoung4379
    @brettyoung4379 1 month ago

    Great talk by Mr. Altman

  • @rocknrollcanneverdie3247
    @rocknrollcanneverdie3247 1 month ago

    Why do OpenAI founders wear white jeans? Should someone tell them?

  • @AntonioLopez8888
    @AntonioLopez8888 1 month ago +12

    So while Huang and Musk are screaming about AI overtaking humanity, Andrej says: we are just in the alpha stage, just beginning.

    • @mmmmmwha
      @mmmmmwha 1 month ago +6

      Not that I'm an AI doomer, but both could be true, and the latter is definitely true.

    • @BR-hi6yt
      @BR-hi6yt 1 month ago +1

      Yes, to answer physics questions LLMs are going to have to learn math and philosophy - sadly, because it's awfully boring until answers appear. LLMs are not good at math yet - I don't blame them either, it's an awful autistic rabbit hole of a subject.

    • @sparklefluff7742
      @sparklefluff7742 1 month ago +5

      Where’s the contradiction?

  • @JakeWitmer
    @JakeWitmer 1 month ago +1

    20:00 He just took a long time to say "Elon isn't full of shit and properly values and prioritizes expedited decision-making."

  • @dancetechtv
    @dancetechtv 1 month ago

    hot hot

  • @alocinotasor
    @alocinotasor 1 month ago

    If only Andrej could talk a bit faster.

  • @ShadowD2C
    @ShadowD2C 1 month ago +2

    So META should open source their models but not “Open”AI, lol

  • @webgpu
    @webgpu 1 month ago

    Just by looking at his facial expressions while he's talking, you can immediately tell he has a high IQ

  • @mohadreza9419
    @mohadreza9419 1 month ago +1

    Close AI, not open AI 😢😢😢

  • @Sebster85
    @Sebster85 1 month ago +9

    Interesting hearing about Elon’s management style from Karpathy. Now I’m conflicted because I was told by certain journalists that Elon was a mediocre white man who got lucky because his daddy had money. 😢

    • @wesleychou8148
      @wesleychou8148 1 month ago

      journalists are liars

    • @grantguy8933
      @grantguy8933 1 month ago +1

      Elon is the most famous African American.

    • @TheHeavenman88
      @TheHeavenman88 1 month ago

      Only an idiot would believe that someone at the top of companies like Tesla and SpaceX is a mediocre guy. That's truly ignorance of the highest level.

    • @flickwtchr
      @flickwtchr 1 month ago

      Find that quote, go ahead, try and find that quote from a journalist who has said what you are asserting here. Virtue signal much?

    • @Nil-js4bf
      @Nil-js4bf 1 month ago

      ​@@flickwtchr It's a dumb article written by a columnist named Michael Harriot

  • @sandeepvk
    @sandeepvk 21 days ago

    Elon will struggle with scale

  • @AmR-gu8zr
    @AmR-gu8zr 1 month ago

    It will be the most unreliable and unpredictable OS; can't wait for this AI bubble to burst.

  • @seppimweb5925
    @seppimweb5925 1 month ago

    Did anyone do the uhm count? Uhm?

  • @maskedvillainai
    @maskedvillainai 1 month ago +1

    This doesn't really train anything. It's just an interview, which is tbh a major distraction from learning anything at all

    • @simonvutov7575
      @simonvutov7575 19 days ago

      True, but what can you expect from these types of interviews? They're not targeted towards computer scientists and engineers

  • @thenextension9160
    @thenextension9160 1 month ago

    Good interview until it became about Elon. What the heck was that about? If I wanted to hear that, I'd watch an Elon interview.

  • @jimbojimbo6873
    @jimbojimbo6873 1 month ago

    Brother, find a use for this current narrow AI first before making it accessible; no one is going to use it for fun lol

  • @ebandaezembe7508
    @ebandaezembe7508 1 month ago +1

    🎯 Key Takeaways for quick navigation:
    00:03 *🎙️ Introduction of Andrej Karpathy*
    - Introduction of Andrej Karpathy, his achievements and professional experience.
    - Karpathy has worked in deep learning research, taught at Stanford, and worked at Tesla and OpenAI.
    01:00 *🏢 Stories from OpenAI's original office*
    - Discussion of the location of OpenAI's first office in San Francisco.
    - Shared memories and anecdotes from time spent in that office.
    02:23 *🤝 Collaboration with Andrej Karpathy*
    - Overview of Andrej Karpathy's career path, his contributions to artificial intelligence, and his collaborations.
    - Discussion of his perspectives on the future of AI and current challenges.
    04:00 *🛠️ Building AI systems*
    - Analysis of building an "operating system" for AI and its infrastructure.
    - Discussion of creating an ecosystem of specialized applications on top of that infrastructure.
    05:38 *💼 Opportunities in the AI ecosystem*
    - Reflection on opportunities for new companies in the AI ecosystem.
    - Analysis of the areas where OpenAI will continue to dominate and where other companies could stand out.
    07:29 *🔍 Future of the LLM ecosystem*
    - Discussion of the future evolution of the LLM (Large Language Model) ecosystem.
    - Comparison with today's computer operating systems and the associated business models.
    09:36 *📈 Importance of scale in AI*
    - Analysis of the importance of scale in AI development.
    - Reflection on the other key factors influencing success in the field.
    11:58 *🧠 AI research challenges*
    - Discussion of the current research challenges around LLMs.
    - Reflection on the mid-sized, solvable problems for the future of AI.
    15:13 *🚀 Elon Musk's leadership philosophy*
    - Analysis of Elon Musk's leadership philosophy and its impact on teams and company culture.
    - Reflection on lessons learned from working alongside great leaders like Musk.
    18:40 *💼 Elon Musk's involvement in managing technical teams*
    - Elon Musk favors direct exchanges with engineers rather than with senior executives.
    - He places great importance on understanding the real state of things and removing obstacles.
    - Musk steps in directly to solve problems and eliminate bottlenecks, showing a strong commitment to the company's goals.
    20:45 *💡 Andrej Karpathy's vision for the future and concerns about the AI ecosystem*
    - Karpathy focuses on the health and vitality of the AI ecosystem, favoring a multitude of startups and innovations.
    - He expresses concern about the concentration of power in a few mega-corporations, especially with the emergence of AGI.
    - His goal is to contribute to a flourishing, balanced AI ecosystem where diversity and creativity thrive.
    22:33 *🏗️ Adaptability of Elon Musk's management methods for founders*
    - The relevance of Elon Musk's management methods depends on the DNA and culture of the company being founded.
    - It is crucial to establish the company's vision and mode of operation from the start for long-term consistency.
    - Musk's management methods can be effective, but they require deep understanding and long-term commitment.
    23:31 *🔄 Composability of AI models and future prospects*
    - Although the composability of AI models is an active area of research, no concept has truly taken root yet.
    - Current neural network models are less composable than traditional code, but methods such as initialization and fine-tuning allow some form of composability.
    - Much remains to be explored to make AI models more composable and more efficient to develop and use.
    24:55 *🧠 Developing AI models with an understanding of physics*
    - The idea of building AI models with an understanding of physics is attracting interest, but current models are not yet advanced enough for it.
    - Future progress will require deep thinking about how to train models more autonomously and integrate them into an understanding process similar to human learning.
    - There is a need to rethink AI training methods so models can acquire a deeper, more flexible understanding of physics.
    30:44 *🌐 Impact of open source on AI development*
    - Openness in the AI ecosystem has the potential to accelerate innovation and improve collaboration, but it also depends on the financial incentives of the big companies.
    - Companies like Facebook and Meta have a crucial role to play in sharing more of their models and knowledge to stimulate the ecosystem.
    - Greater transparency and collaboration could make AI more accessible and beneficial for all players in the industry.
    32:23 *🚀 Stimulating the AI ecosystem for more growth and diversity*
    - It is crucial to create infrastructure and resources to support learning and collaboration in the AI ecosystem.
    - Companies and researchers need to be more open in sharing their knowledge and data to foster broader innovation.
    - Investing in training programs and open initiatives can contribute to a more dynamic and inclusive AI ecosystem.
    33:40 *🛠️ Evolution of AI model architectures*
    - Although Transformers were a major breakthrough, new architectures will likely emerge to address AI's future challenges.
    - Modifications to existing architectures, along with the exploration of new concepts, are essential for progress toward AGI.
    - Adapting AI models to hardware constraints and finding new forms of composability will be key aspects of the future evolution of architectures.
    Made with HARPA AI

  • @chronokoks
    @chronokoks 1 month ago

    I can still hear some of his Slovak accent in his voice. It took me a long time to get rid of mine - I was practicing every goddamn day for a year.