Interpretable Deep Learning for New Physics Discovery

  • Published 12 Nov 2024

COMMENTS • 77

  • @MCRuCr
    @MCRuCr 3 years ago +106

    A Python package with a Julia backend?
    What a time to be alive!

    • @MarkMark
      @MarkMark 3 years ago +2

      Also, I can't wait for the Elixir NX / Axon version. ; )

    • @billykotsos4642
      @billykotsos4642 3 years ago

      Wow

    • @frun
      @frun 3 years ago +11

      Next: school homework doing itself.
      What a time to be alive!

    • @tanchienhao
      @tanchienhao 3 years ago +18

      Two Minute Papers? :)

    • @memegazer
      @memegazer 1 year ago +1

      @@tanchienhao
      "Dear Fellow Scholars, this is Two Minute Papers with Dr. Károly Zsolnai-Fehér."

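An editorial note on the thread above: the "Python package with a Julia backend" is presumably PySR, Miles Cranmer's symbolic regression library, whose Python interface drives the SymbolicRegression.jl search engine. The sketch below is a minimal usage example under that assumption; the toy data, operator lists, and iteration count are made up for demonstration and are not taken from the talk.

```python
# Minimal sketch: Python frontend, Julia backend (assuming the package is PySR).
# The dataset and operator choices here are illustrative only.
import numpy as np
from pysr import PySRRegressor

# Toy data generated from a known formula: y = 2.5 * cos(x3) + x0^2 - 2
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2.5 * np.cos(X[:, 3]) + X[:, 0] ** 2 - 2.0

model = PySRRegressor(
    niterations=40,                          # generations of the evolutionary search
    binary_operators=["+", "-", "*", "/"],
    unary_operators=["cos", "exp"],
)
model.fit(X, y)   # the heavy search runs in the Julia backend
print(model)      # prints a Pareto front of candidate equations (accuracy vs. complexity)
```

The Python layer handles data and the scikit-learn-style interface; the Julia side performs the expensive evolutionary search over expression trees, which is what makes the combination attractive.
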
  • @SciFiFactory
    @SciFiFactory 3 years ago +73

    From now on, when I am fooling around, trying out different formulas, I will call it "manual symbolic regression". Thanks a lot! :D

  • @yuyingliu5831
    @yuyingliu5831 3 years ago +27

    This is one of the best works I have read about this year; thanks for the lecture, which makes it even clearer.

  • @jameswalters8755
    @jameswalters8755 3 years ago +18

    "When one does a theoretical calculation, there are two ways of doing it. Either you can have a clear physical model in mind or you should have a rigorous mathematical basis." -Freeman Dyson recounting his meeting with Enrico Fermi.

    • @G12GilbertProduction
      @G12GilbertProduction 3 years ago

      Do you have a source for this quote?

    • @jameswalters8755
      @jameswalters8755 3 years ago +1

      Start at 1:15 and end at 1:27 of the YouTube video "Freeman Dyson - Fermi's rejection of our work (94/157)". Cheers!

  • @QSuperstar888
    @QSuperstar888 4 months ago

    Absolutely incredible lecture - continually coming back to the well on this one

  • @joey199412
    @joey199412 3 years ago +4

    One step closer to neural networks no longer being complete black boxes. Great work.

  • @SciFiFactory
    @SciFiFactory 3 years ago +9

    This is absolutely amazing!
    It feels like this is the biggest step (that I have seen) towards machines literally making discoveries and teaching us!
    And the fact that I understood most of it despite having no experience in AI or programming makes me happy and is great praise for the presenter!
    I love it and I'll tell everyone about it! :D

  • @ccdavis94303
    @ccdavis94303 3 years ago

    IMO, this is a profound direction for research. Congratulations to the entire team.
    The improved generalization gives me chills.

  • @HD-qq3bn
    @HD-qq3bn 3 years ago +6

    Very interesting work, but one concern is the efficiency of the genetic algorithm; maybe MCMC or HMC methods could achieve higher search efficiency.

  • @smoothcortex
    @smoothcortex 3 years ago +3

    Very interesting! I wish psychology and neuroscience applied similar models!
    The brain and associated behaviour would be a lot easier to organise and interpret if we had symbolic methods of representing the interactions.

  • @The231998
    @The231998 3 years ago +2

    Wow, the world needs more videos like this

  • @jedhomer4381
    @jedhomer4381 3 years ago +5

    Amazing stuff! Miles Cranmer is a genius!

  • @NoNTr1v1aL
    @NoNTr1v1aL 1 year ago +1

    Absolutely amazing video!

  • @476megaman
    @476megaman 3 years ago

    This kid is a pure empiricist at heart.

  • @omarsinno2774
    @omarsinno2774 3 years ago +2

    **Start of the video**
    Miles: if you forget everything from this, I need you to remember: ...
    Me: oh man, I hope he's not saying this because it's boring
    **End of the video**
    Me: this man is a god

  • @ahmadhasabi4829
    @ahmadhasabi4829 3 years ago +9

    Very interesting, I want to apply it in Geophysics

  • @frederickmannings8700
    @frederickmannings8700 3 years ago +3

    This will start a new field

  • @nicolasbozzo2364
    @nicolasbozzo2364 1 year ago

    Great, will try to apply it to Economics

  • @mortengrum1258
    @mortengrum1258 3 years ago

    Could you add a reference to the work on extracting fluid dynamics PDEs from a trained GNN that was/is co-led by Elaine Cui (Flatiron Inst)? Link to pre-prints would be fine too. Thanks!

  • @nbme-answers
    @nbme-answers 3 years ago +2

    2:58 for the immunologists out there, this is somatic hypermutation for expressions!

  • @banghuaxu4735
    @banghuaxu4735 2 years ago

    Brilliant idea!!

  •  3 years ago

    Hey Steve, are you saying that neural networks are models of the brain? What if that brain was Newton's brain?
    Congratulations, Mr. Cranmer. A major breakthrough.

  • @gianpierocea
    @gianpierocea 3 years ago +2

    Wow, super cool, very happy I stumbled upon this video. A question: at the end of this process, do you just get a "point estimate" in the space of functions that have the basis you gave? If so, how difficult would it be to obtain a probabilistic generative model that gives you a "distribution" of functions? I am completely in the dark about this topic, but definitely wanting to know more. Cool stuff!

  • @harikumarmuthu8819
    @harikumarmuthu8819 1 year ago

    Why is a DL model only converted into math models, and not into an algorithm, or a combination of both? That way you could turn a DL black-box model into an algorithm plus math equations, forming code.

  • @hp127
    @hp127 3 years ago

    Thank you, a nice introduction with examples of the possibilities.

  • @TheStrings-83639
    @TheStrings-83639 8 months ago

    Well, it looks like what I did in Excel, making the solver optimize mathematical operators through numbers to minimize the residuals.

  • @flowy-moe
    @flowy-moe 4 months ago

    Thank you very much! This is awesome :o

  • @vahidhosseinzadeh4630
    @vahidhosseinzadeh4630 2 years ago

    Thank you. This is really cool. Is your package available also in Julia itself?

  • @G12GilbertProduction
    @G12GilbertProduction 3 years ago

    Entropical relatives with a minimal potential of power isn't not look foward under the Maxwell equations?

  • @iamyouu
    @iamyouu 3 years ago

    Wow, I would love to see more videos like this; so many videos are just theory.

  • @ewal31
    @ewal31 3 years ago +1

    I am curious how this is different from, or better than, what was done in the AI Feynman papers.

  • @WhenThoughtsConnect
    @WhenThoughtsConnect 3 years ago

    I think this is like the PCA of one subclass of a vector component like a matrix column and not the row.

  • @devfromthefuture506
    @devfromthefuture506 3 years ago

    Amazing work!!

  • @donggeon-kim
    @donggeon-kim 3 years ago +1

    Very cool! Respect!!!

  • @sanjeetchhokar5800
    @sanjeetchhokar5800 3 years ago +2

    This was great!

  • @TheDRAGONFLITE
    @TheDRAGONFLITE 3 years ago

    18:14 why is it rotated?

  • @sokhengdin8012
    @sokhengdin8012 3 years ago

    Deep learning physics era is soon to be discovered...!

  • @hacker2ish
    @hacker2ish 3 years ago

    Why not try it on the 3 body problem?

  • @lakshminarayanansamavedham3770
    @lakshminarayanansamavedham3770 3 years ago

    Cool, but the genetic programming idea from Koza (1992) and several works that followed come to mind.

  • @myopinionman8199
    @myopinionman8199 2 years ago

    This is literally the holy grail... get a machine to reason scientifically and be able to communicate the results!

  • @evenaicantfigurethisout
    @evenaicantfigurethisout 3 years ago

    What are the implications of something being "low dimensional"? Please eli5

  • @matthewjames7513
    @matthewjames7513 3 years ago +5

    Isn't this just the genetic optimization algorithm + machine learning?

    • @jeroenritmeester73
      @jeroenritmeester73 3 years ago +10

      The fact that it is conceptually simple does not mean the applications can't be great. Most "big" scientific discoveries are built upon thousands of "conceptually simple discoveries" like this.
      Edit: I always think of it like this: given enough time, could I have created this myself? Possibly, but unlikely. It is always easy to tell yourself "I could have done that" once you already know it.

    • @matthewjames7513
      @matthewjames7513 3 years ago +9

      @@jeroenritmeester73 Thanks for the comment. I re-read my comment and realized it had a negative tone. I'm actually super impressed with this guy's work! It's a huge step in the right direction. I love the idea of developing analytical formulas directly from data by using machine learning :)

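An editorial note on the exchange above: the core recipe is indeed genetic-programming symbolic regression combined with a trained neural network, and the value lies in how the two stages are wired together. The sketch below shows one illustrative way to chain them in Python; the toy dataset, the MLP standing in for the talk's graph network, and the operator set are assumptions for demonstration, not the author's actual pipeline.

```python
# Illustrative two-stage sketch: (1) fit a neural network to data, (2) run
# genetic-programming symbolic regression on the function the network learned.
# All specifics (data, network, operators) are assumed for demonstration.
import numpy as np
from sklearn.neural_network import MLPRegressor
from pysr import PySRRegressor

# Toy "physics" data with noise: F = m * a
rng = np.random.default_rng(1)
X = rng.uniform(0.1, 10.0, size=(500, 2))             # columns: mass, acceleration
y = X[:, 0] * X[:, 1] + rng.normal(0.0, 0.05, 500)

# Stage 1: the black-box model (a GNN in the original work; a small MLP stands in here).
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(X, y)

# Stage 2: symbolic regression on the network's learned mapping. Fitting to
# net.predict(X) rather than the raw targets distills the network's smoothed function.
sr = PySRRegressor(niterations=30, binary_operators=["+", "-", "*", "/"])
sr.fit(X, net.predict(X))
print(sr)   # ideally recovers something close to x0 * x1
```

The division of labour is the design point: the network absorbs noise and high-dimensional structure, while the genetic search turns what it learned back into a compact, interpretable formula.
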
  • @Didanihaaaa
    @Didanihaaaa 3 years ago

    I like it. PINNs, and now fitting on a deep net!

  • @allurbase
    @allurbase 3 years ago

    Can this do particle physics?

  • @pauloffborba
    @pauloffborba 3 years ago

    "Low dimensional": is it like pure functions in functional programming?

  • @WhenThoughtsConnect
    @WhenThoughtsConnect 3 years ago

    You compress the multivariable data into a plane, like dropping marbles onto a floor, and wait until they roll far enough that the vector fields acting on them no longer interact.

    • @brendawilliams8062
      @brendawilliams8062 3 years ago

      Cool I roll marbles for a hobby.

    • @brendawilliams8062
      @brendawilliams8062 3 years ago

      I mean, if you have 212, then how many 101’s are there?

    • @brendawilliams8062
      @brendawilliams8062 3 years ago

      I say there’s 3

    • @WhenThoughtsConnect
      @WhenThoughtsConnect 3 years ago

      If you are talking about a vector with 212 components then you will have 212 marbles in this plane before dropping through an optimized vector field. I think what he is talking about is the vector sums of the position functions of all the marbles interacting through these vector fields and averaging them to have a net movement to generalize the answer for that one particular component of the 212 vector.

    • @brendawilliams8062
      @brendawilliams8062 3 years ago

      @@WhenThoughtsConnect a 212 vector. Hi can it add more than 10072

  • @shawhin-music
    @shawhin-music 3 years ago

    Very cool!

  • @InquilineKea
    @InquilineKea 1 year ago

    Wolfram does both. Also the probabilistic programming people.

  • @frun
    @frun 3 years ago +1

    One needs highly fluid dark matter to understand this 🙂

  • @JuanRamirez-di9bl
    @JuanRamirez-di9bl 2 years ago

    So, will this thing finally figure out how to get a functional hoverboard?

  • @dankkush5678
    @dankkush5678 3 years ago

    nice one

  • @abderrahimzilali3293
    @abderrahimzilali3293 3 years ago

    Wigerian Prior?

  • @copilco1
    @copilco1 3 years ago

    nice

  • @brandonhuynh1966
    @brandonhuynh1966 3 years ago

    This is amazing. But dude, you need to breathe. I'm getting out of breath listening to this.

    • @arnold-pdev
      @arnold-pdev 3 years ago +2

      Lol, I've been there. Just nerves and excitement. Remedied by practice.

  • @JousefM
    @JousefM 3 years ago

    Nice! 😏

  • @rctime8279
    @rctime8279 3 years ago +5

    TLDR; Math is dead.

  • @anshumansinha5874
    @anshumansinha5874 2 years ago

    Why is he making such sounds in between the explanations? That happens when you're thinking while presenting! Try to go through the presentation before recording!

  • @drscott1
    @drscott1 3 years ago

    Amazing stuff. You were doing great until you started talking about dark matter. Let's call it what it is: 'plasma and dust'.