David Bau PhD Defense

  • Published 18 Nov 2024

COMMENTS • 25

  • @MEHRAN986
    @MEHRAN986 3 years ago +6

    "You need to hold on tight to your optimism. Because if you don't, you're never going to figure out how to do the hard things to answer the hard questions and make it really work."

  • @YashMali-jf6hk
    @YashMali-jf6hk 4 months ago

    We need more videos from you! If you have lectures available and are comfortable sharing them, please do so.

  • @peterw.5700
    @peterw.5700 1 year ago +1

    This presentation has blown my mind. Thank you for making this publicly available!

  • @squarehead6c1
    @squarehead6c1 9 months ago

    Wow, Dr. Bau is really pedagogical and is pleasant to listen to.

  • @xxyyzz8464
    @xxyyzz8464 2 months ago

    For the example at 12:14, I’m curious whether you could have used more descriptive language like “Only change the bed to be green while maintaining tan walls”? To be clear though, I agree it is a problem for a generator like this when you do not understand how it works and are trying to control it this way. Good work!

  • @ililil
    @ililil 3 years ago +1

    Congratulations, Dr. Bau! It is a great and systematic research project, and a very important one with many practical implications. I am sure it could be a great step towards a transparent general AI if you start combining and interconnecting such separate networks at the interaction level. I am fascinated! Thank you very much! It is really inspiring!

  • @ninirema4532
    @ninirema4532 1 year ago

    Dear Prof. Dr. Sir,
    Thank you very 🙏 much 🙏

  • @ervinperetz5973
    @ervinperetz5973 3 years ago +1

    Great watch. Thanks for sharing this, David. - Ervin

  • @katelingley3869
    @katelingley3869 3 years ago

    I can't join the chat without creating a channel, which I don't have time to do - but I wanted to drop in and congratulate you, Dr Bau!!

  • @MarinaArtDesign
    @MarinaArtDesign 3 years ago +1

    "PhD Defense at MIT," and yet for most people he is "the amazing maze guy". Due to the pandemic and the KDP explosion, his mazes are now in demand. A lot of people are trying to figure out how to write a solution script.

    • @DavidBau
      @DavidBau  3 years ago +3

      Check out this link - web.mit.edu/PostScript/obfuscated-1993/labyrinth.ps - it is my submission to the Obfuscated PostScript Contest web.mit.edu/PostScript/obfuscated-1993/WINNERS and it generates random mazes together with a solution, with all the computation done on the PostScript printer. Edit the PostScript as a text file to change the options.
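
      A minimal Python sketch of the same idea (a random maze generated together with its solution) might look like the following. This is an illustrative assumption, not the original PostScript entry; the function names and parameters are made up for the example.

      # Hypothetical sketch: carve a random maze with depth-first search and
      # solve it with breadth-first search. Not the original PostScript code.
      import random
      from collections import deque

      def make_maze(width, height, seed=None):
          """Return a dict mapping each cell to the set of cells it connects to."""
          rng = random.Random(seed)
          passages = {(x, y): set() for x in range(width) for y in range(height)}
          stack, visited = [(0, 0)], {(0, 0)}
          while stack:
              x, y = stack[-1]
              neighbours = [
                  (x + dx, y + dy)
                  for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                  if (x + dx, y + dy) in passages and (x + dx, y + dy) not in visited
              ]
              if neighbours:
                  nxt = rng.choice(neighbours)  # carve a passage to a random unvisited neighbour
                  passages[(x, y)].add(nxt)
                  passages[nxt].add((x, y))
                  visited.add(nxt)
                  stack.append(nxt)
              else:
                  stack.pop()  # dead end: backtrack
          return passages

      def solve(passages, start, goal):
          """Shortest path from start to goal through the carved passages."""
          prev = {start: None}
          queue = deque([start])
          while queue:
              cell = queue.popleft()
              if cell == goal:
                  path = []
                  while cell is not None:  # walk predecessors back to the start
                      path.append(cell)
                      cell = prev[cell]
                  return path[::-1]
              for nxt in passages[cell]:
                  if nxt not in prev:
                      prev[nxt] = cell
                      queue.append(nxt)
          return None

      if __name__ == "__main__":
          maze = make_maze(10, 10, seed=1993)
          print(solve(maze, (0, 0), (9, 9)))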

  • @MyMrChill
    @MyMrChill 2 years ago

    Great job! I really appreciate what you have achieved.

  • @KonstantinMedyanikov
    @KonstantinMedyanikov 1 year ago

    Really cool results!

  • @uyaseen
    @uyaseen 3 years ago

    Thanks for sharing, it was a great watch!
    Can you please comment on:
    1. How do you search for "concept neurons" in giant models efficiently?
    2. You mentioned in another comment that your group is working on interpreting GPT-X models; can you briefly comment on the concepts your group is trying to find in GPT-X-based models?

  • @rexf5152
    @rexf5152 3 years ago +1

    Congratulations!

  • @mumbaicarnaticmusic2021
    @mumbaicarnaticmusic2021 3 years ago

    Congrats! This was really interesting!

  • @vikramkaviya96
    @vikramkaviya96 3 years ago

    I am at the 27-minute mark and I am too impatient to finish the video before asking: does this method scale up to models with millions and billions of parameters?

    • @DavidBau
      @DavidBau  3 years ago +3

      Yes. We have found that the largest models, trained for a long time on massive data sets, tend to have very rich interpretable structure. Developing interpretability methods for massively parameterized models such as GPT-X is the topic of ongoing work in my group, and we have found that large models are a very target-rich environment. Oddly enough, one of the more difficult problems is to clarify how interpretable structure emerges in the very simplest toy settings, trained on small problems, where the emergent structure is less obvious.

  • @gyeonghokim
    @gyeonghokim 2 years ago

    The video has been greatly intriguing, and I really enjoyed watching it. Thanks for sharing! 58:16

  • @DavidBau
    @DavidBau  3 years ago

    By the way, if you or somebody you know is considering a PhD, I am looking for students! (For Fall 2022.) Check out our papers davidbau.com/research/, apply to the Khoury school www.khoury.northeastern.edu/apply/phd-apply/, and drop me a note if you are interested.

  • @iamr0b0tx
    @iamr0b0tx 9 months ago

    Nice work. "Bushy eyebrow kids" isn't something I thought I would hear today 😅