Deep Learning to Discover Coordinates for Dynamics: Autoencoders & Physics Informed Machine Learning

  • Published 14 May 2024
  • Joint work with Nathan Kutz: / @nathankutzuw
    Discovering physical laws and governing dynamical systems is often enabled by first learning a new coordinate system where the dynamics become simple. This is true for the heliocentric Copernican system, which enabled Kepler's laws and Newton's F=ma, for the Fourier transform, which diagonalizes the heat equation, and many others. In this video, we discuss how deep learning is being used to discover effective coordinate systems where simple dynamical systems models may be discovered.
    Citable link for this video at: doi.org/10.52843/cassyni.4zpjhl
    @eigensteve on Twitter
    eigensteve.com
    databookuw.com
    Some useful papers:
    www.pnas.org/content/116/45/2... [SINDy + Autoencoders]
    www.nature.com/articles/s4146... [Koopman + Autoencoders]
    arxiv.org/abs/2102.12086 [Koopman Review Paper]
    This video was produced at the University of Washington
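
    A minimal sketch of the idea in code (a sketch only, not the implementation from the papers above; the layer sizes, candidate library, and loss weights are illustrative assumptions): an autoencoder is trained so that its latent variables z also satisfy a sparse dynamical model dz/dt = Theta(z) Xi, in the spirit of the SINDy + Autoencoders paper.

    # Sketch of a SINDy-autoencoder loss in PyTorch (illustrative throughout).
    import torch
    import torch.nn as nn

    class SindyAutoencoder(nn.Module):
        def __init__(self, n=128, d=2, n_lib=6):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(n, 64), nn.ELU(), nn.Linear(64, d))
            self.decoder = nn.Sequential(nn.Linear(d, 64), nn.ELU(), nn.Linear(64, n))
            self.Xi = nn.Parameter(torch.zeros(n_lib, d))  # sparse model coefficients

        def library(self, z):
            # Candidate terms Theta(z) for d = 2: [1, z1, z2, z1^2, z1*z2, z2^2].
            z1, z2 = z[:, :1], z[:, 1:]
            return torch.cat([torch.ones_like(z1), z1, z2, z1**2, z1*z2, z2**2], dim=1)

        def forward(self, x):
            z = self.encoder(x)
            return z, self.decoder(z)

    def sindy_loss(model, x, x_dot, w_dyn=1e-4, w_sparse=1e-5):
        z, x_hat = model(x)
        # Chain rule: dz/dt = (d encoder / dx) x_dot, computed as a JVP.
        _, z_dot = torch.autograd.functional.jvp(
            model.encoder, x, x_dot, create_graph=True)
        recon = ((x - x_hat) ** 2).mean()                          # reconstruct x
        dyn = ((z_dot - model.library(z) @ model.Xi) ** 2).mean()  # simple latent model
        sparse = model.Xi.abs().mean()                             # encourage sparsity
        return recon + w_dyn * dyn + w_sparse * sparse
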
  • Science & Technology

COMMENTS • 105

  • @liamtsai2179
    @liamtsai2179 2 years ago +29

    The YT algorithm does know where to take me; I never thought I'd sit through a lecture in my leisure time fully engaged. Very well done!

  • @AICoffeeBreak
    @AICoffeeBreak 2 years ago +123

    Knowing a lot about autoencoders already, it is useful to see how they are starting to spread into other research areas, like physics (my favorite!). Great to see a good explanation of ML as a tool for further discovery. Thanks for this video!

    • @wibulord926
      @wibulord926 1 year ago +1

      Can't believe I'm seeing you here; your videos are helpful too, thank you a lot.

  • @doganbirol13
    @doganbirol13 2 years ago +16

    I might just have found my research topic for my master's. Fascinating, thanks. Besides that, the quality of the video deserves praise: a dark background that is easy on the eyes, consistently high-quality graphics, and a narrator who does his best to create understanding with clear use of English.

  • @aidankennedy6973
    @aidankennedy6973 2 years ago +12

    Incredible work your team is doing. So much to think about, with incredibly wide-ranging applications.

  • @marioskokmotos8274
    @marioskokmotos8274 2 years ago +8

    Awesome work! Thanks for sharing in such a digestible way! I feel we cannot even start to imagine how many different fields this approach could be used in.

  • @gammaian
    @gammaian 2 years ago +3

    Your channel is incredible Prof. Brunton, thank you for your work! There is so much value here

  • @danberm1755
    @danberm1755 1 year ago

    Fantastic discussion! Love that you cover the complexities so in-depth.

  • @HeitorvitorC
    @HeitorvitorC 2 years ago +3

    Thank you for your videos, Steve! Also, your gesticulation eases the complexity of your talk significantly. Keep up the good work!

  • @jimlbeaver
    @jimlbeaver 2 years ago +4

    This is the most amazing stuff you guys have come up with so far!!! Awesome… great job.

  • @albertocaballero7922
    @albertocaballero7922 2 years ago +1

    Awesome work. I can't believe I understood most of this topic. One of the best explanations I have seen so far.

  • @jessegibson3548
    @jessegibson3548 2 years ago +2

    Thank you for this vid. Really great content you are putting out for the community Steve.

  • @lablive
    @lablive 2 years ago

    I'm lucky to have come across this work, positioned between the 3rd and 4th paradigms of science. As mentioned at the end of this video, I think the key to interpretability is to take advantage of inductive biases, expressed as existing models or algorithms for forward/inverse problems, when designing the encoder, decoder, and loss function.

  • @Ejnota
    @Ejnota 2 years ago

    How much I love these videos and the quality of the software they use!

  • @iestynne
    @iestynne 2 years ago +2

    This was a super interesting one. Thank you very much for another engaging whirlwind tour through recent advances in computer science! :)

  • @__-op4qm
    @__-op4qm 2 years ago

    Very kindly structured explanations like this can make everyone feel welcome and interested. This is exactly why I subscribed to this channel almost 2 years ago; all the videos are very inviting and welcoming, and by the end they leave a calm sense of curiosity balanced with a pinch of reassurance, free of any unnecessary panic. In other places these types of subjects are often presented with a thick padding of jargon and dry math abstractions, but not here. Here the explanations are distilled into a sparse latent form without loss of generality and with a clear reminder of the real-life value of these methods.

  • @MaxHaydenChiz
    @MaxHaydenChiz 2 years ago

    This is a really good video. Really well explained and it let me see how your field was using this tech. Thanks for posting it. It sounds like you are doing a lot of interesting research. I'll keep an eye on your channel now that the algorithm recommended it to me.

  • @diegocalanzone655
    @diegocalanzone655 2 years ago

    Brought here by the YT algorithm while finishing my BS thesis on non-physics-informed autoencoders learning from the Shallow Water Equations. I will definitely dedicate further study to the lecture content. Thanks!

  • @jinghangli623
    @jinghangli623 1 year ago

    I've been looking for some insights on how to leverage deep learning to optimize our MRI transmit coil. This has been extremely helpful

  • @skeletonrowdie1768
    @skeletonrowdie1768 2 years ago +2

    Thanks so much! This definitely helped me get into deep learning for dynamical systems. I am working on a problem where I want to classify the state of a viral particle near a membrane. I transformed a lot of simulation frames into structural descriptors. I am at the point where I need to decide on an architecture and loss functions to learn with. I have begun naively with a dense neural network. This, however, seems very interesting; not directly applicable, but it could be another input for the DNN. The z could be describing certain constant dynamics surrounding the viral particle, which could help classify the state. Anyway, thanks a lot!

  • @jeroenritmeester73
    @jeroenritmeester73 2 years ago +5

    Hi Steve, very interesting video. One remark on the slides that you use: I tend to watch videos with closed captions despite having average hearing, because it helps me keep track of what you're saying. I can imagine that people with hearing impairments will also do this, but sometimes elements on your slides overlap with YouTube's space for subtitles, like the derivative at 1:45. Perhaps this is something you could take into account, particularly for slides that do not contain many different elements and allow for scaling. Thanks again.

  • @zhanzo
    @zhanzo 2 years ago +4

    I wish I were able to press the like button more than once.

  • @drskelebone
    @drskelebone 2 years ago

    I will always love that the simple solution was just returned as the simple solution. :D

  • @dr.mikeybee
    @dr.mikeybee 2 years ago

    I've just been learning about how to use PCA to reduce dimensionality. Now I see one can go further and learn the meaning of the linear combination at the bottleneck. I don't really understand how one can use additional loss functions to find that meaning, but now I know it can be found. I'll need to think about it. Thank you.

  • @AliRashidi97
    @AliRashidi97 2 years ago

    Great lecture. Thanks a lot 🙏

  • @rockapedra1130
    @rockapedra1130 10 months ago +1

    Nice, but I would love to see some demos of the results: for example, the discovered equation of the pendulum, the reconstruction from the found dynamics, and a comparison between the two.

  • @PedrossaurusRex
    @PedrossaurusRex 2 years ago

    Amazing lecture!

  • @have_a_nice_day399
    @have_a_nice_day399 2 years ago

    Thank you for the amazing video. Would you please give a few simple examples and explain step by step how to use these machine learning algorithms?

  • @alfcnz
    @alfcnz 2 years ago +7

    Cool, nice lecture! 🤓🤓🤓

  • @weeb3277
    @weeb3277 2 years ago +1

    Very esoteric video.
    I like. 👍

  • @leonardromano1491
    @leonardromano1491 2 years ago

    Nice video! I am very new to this subject (in fact this is the first video I have seen about it), but it seems that essentially what you do is derive dynamics from an action principle (minimizing the generalized loss functional), so any partially known physics, I suppose, would just be incorporated via Lagrange multipliers. About the two different approaches to linearisation (going to higher and lower dimension), I think that both are physically motivated. You can definitely expect dynamics to become more linear if you go to higher dimension too. Think about thermodynamics: you can either try to describe averaged degrees of freedom like entropy, heat, etc., which follow simple laws, or you could try to describe the system by describing each individual particle. It wouldn't really be feasible, but it's not unlikely that the dynamics can be described by a simple, possibly linear law (like a box full of free collisionless particles in a homogeneous gravitational field).

  • @ArxivInsights
    @ArxivInsights 2 years ago

    Fantastic video!!

  • @AA-gl1dr
    @AA-gl1dr 2 years ago

    Thank you so much!

  • @johnsalkeld1088
    @johnsalkeld1088 2 years ago +3

    Do you have your presentation available online? Or links to the arXiv pages for the papers referenced? I would love to read them.

  • @ernstuzhansky
    @ernstuzhansky 4 months ago

    This is very cool!

  • @majstrstych15
    @majstrstych15 2 years ago

    Hey Steve, your videos are great! I want to ask how balanced model reduction can be used in the deep learning autoencoder. I'm asking because with balanced model reduction you are able to find the coordinate transformation that equalizes and diagonalizes the Gramians, but this transformation could turn out to be dense and non-interpretable, right? Could you please explain what the advantage of combining these two would be? Thanks, your big fan!

  • @netoskin
    @netoskin 1 year ago

    Amazing!!

  • @marjankrebelj4007
    @marjankrebelj4007 2 years ago

    I saw the thumbnail and the title and I assumed this was a course on encoding audio (dynamics) for movie editing. :)

  • @johnsalkeld1088
    @johnsalkeld1088 2 years ago +5

    The linear areas seem to be a maximising of the neighbourhoods implied by the implicit function theorem; I am probably wrong, it was 1987 when I studied this.

  • @eerturk
    @eerturk 2 years ago

    Thank you.

  • @AllanMedeiros
    @AllanMedeiros 2 years ago

    Fantastic!

  • @spencermarkowitz2699
    @spencermarkowitz2699 1 year ago

    so amazing

  • @vine6666
    @vine6666 2 years ago

    Just curious whether your usage of the term "lift" is related to the topological/categorical use of that term? Specifically, whenever there are morphisms f: X -> Y and g: Z -> Y, a lift is a map h: X -> Z such that f = gh (i.e. the diagram commutes).
    I think the analogy works: let X be the original data space, Z the latent space, and Y = X. The composition gh is a map X -> Z -> X; if we set f to be the identity on X and let h and g be the encoder and decoder, then f ≈ gh expresses the reconstruction objective.
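
    In symbols (one way to render the commenter's construction, with φ the encoder h and ψ the decoder g; the autoencoder realizes the lift only approximately, the reconstruction loss measuring how far the diagram is from commuting):

    \[
      f = \mathrm{id}_X = g \circ h
      \;\;\rightsquigarrow\;\;
      \min_{\varphi,\,\psi}\; \mathbb{E}_x \bigl\| x - \psi(\varphi(x)) \bigr\|_2^2 .
    \]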

  • @weert7812
    @weert7812 2 years ago +4

    Do you know of any Jupyter notebook examples, in say Keras or PyTorch, that show how to do this?
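
    A bare-bones PyTorch training loop for a model and loss like the sketch under the video description above (a sketch only, with random stand-in data; SindyAutoencoder and sindy_loss refer to that earlier illustrative code, not to an official notebook):

    # Minimal training-loop sketch; X and X_dot are stand-ins for real
    # snapshot data and their time derivatives.
    import torch
    from torch.utils.data import DataLoader, TensorDataset

    X = torch.randn(1000, 128)      # stand-in snapshots
    X_dot = torch.randn(1000, 128)  # stand-in time derivatives

    model = SindyAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loader = DataLoader(TensorDataset(X, X_dot), batch_size=64, shuffle=True)

    for epoch in range(100):
        for x, x_dot in loader:
            opt.zero_grad()
            loss = sindy_loss(model, x, x_dot)
            loss.backward()
            opt.step()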

  • @krishnaaditya2086
    @krishnaaditya2086 2 years ago

    Awesome Thanks!

  • @user-uy6bo6il4q
    @user-uy6bo6il4q 10 months ago

    I tried to use an autoencoder to do anomaly detection for an anti-fraud task in social media. It's a good way to do information compression. But I never thought it could be used in model discovery for science! AI will change the game of science research today!

  • @andersonmeneses3599
    @andersonmeneses3599 2 years ago

    Thanks! 👍🏼

  • @beauzeta1342
    @beauzeta1342 2 months ago

    Thank you, professor, for the very inspiring video! At 12:05, can we say something about the uniqueness of the representation transforms phi and psi? Or might they not be unique at all, and depend on how we train the network?

  • @joseantoniogambin9609
    @joseantoniogambin9609 2 years ago

    Awesome!

  • @SaonCrispimVieira
    @SaonCrispimVieira 2 years ago +19

    Professor Brunton, thanks to you and your teammates for the amazing content. I think it is desirable to correct the pendulum videos, because the images are affected by an affine transformation due to lens distortion; looking at the bottom line of the video you can see how distorted it is. There are libraries to identify the parameters of the camera's affine transformation using a chessboard, tracking the distortion of the corner coordinates.

    • @alfcnz
      @alfcnz 2 years ago +6

      You can easily factor the affine transformation into the encoder (and the inverse one into the decoder). You don't always have access to distortion correction settings, and as long as you've been using the same capturing equipment, you will be able to factor out such transformations during training.

    • @SaonCrispimVieira
      @SaonCrispimVieira 2 years ago +1

      @@alfcnz Professor Canziani, it's amazing to have your answer here; in a way I'm your virtual machine learning student on YouTube! Thanks a lot to you and your teammates for the amazing content.
      I totally agree, especially when it comes to a linear transformation that would be easily understood by the network. My biggest concern is that this distortion could be wrongly treated as part of the problem's physics, when it is really an observational error, especially when linearity is enforced in the dynamics discovery.

    • @maythesciencebewithyou
      @maythesciencebewithyou 2 years ago

      @@alfcnz But if you trained it on distorted image data, wouldn't it make a false correction to undistorted image data?

    • @SaonCrispimVieira
      @SaonCrispimVieira 2 years ago

      @@iestynne It is not difficult to calibrate the camera!

  • @FromaGaluppo
    @FromaGaluppo 2 years ago

    Amazing

  • @meetplace
    @meetplace 6 months ago

    @3:30 If Steve Brunton says something is "a difficult task", you can be sure it really is a difficult task! :D

  • @radenmuaz7125
    @radenmuaz7125 2 years ago

    How do you deal with an external control input u(t) for control problems and robots?
    Maybe these are called exogenous inputs.

  • @mattkafker8400
    @mattkafker8400 2 years ago

    Tremendous video!

  • @niccologiovenali7597
    @niccologiovenali7597 1 year ago

    you are the best

  • @vitorbortolin6810
    @vitorbortolin6810 2 years ago

    Great!

  • @vyacheslavboyko6114
    @vyacheslavboyko6114 2 years ago +1

    23:32 sounds interesting. So you say this is a way to learn the linearizing transform for the convective term of the Navier-Stokes equation? How do you even know whether, after training the network, we end up with a meaningful solution?

    • @iestynne
      @iestynne 2 years ago

      You might not. Sara Hooker has recently been arguing that properties like accuracy and interpretability (among others) may directly conflict, so the better one is, the worse the others are. You might have to sacrifice a 'meaningful' solution for an accurate one.

  • @rrr33ppp000
    @rrr33ppp000 2 years ago +1

    YES

  • @haydergfg6702
    @haydergfg6702 2 years ago

    Thank you a lot. I hope you share how to apply it with code.

  • @kawingchan
    @kawingchan 2 years ago

    Many nonlinear systems exhibit chaos (divergence in the “original” coordinates if two systems have a tiny difference in their initial conditions). I would be interested to see whether the “recovered” x̂ also reproduces the chaotic behavior with that same Lyapunov exponent, and also what should happen to the latent z's.

    • @hfkssadfrew
      @hfkssadfrew 2 years ago

      First question: they do. It was validated in the 1990s-2000s, when numerous engineers and mathematicians played with shallow neural networks. Second, I don't have an answer.

  • @tharunsankar4926
    @tharunsankar4926 2 years ago

    How would we train a network like this though?

  • @frankdelahue9761
    @frankdelahue9761 2 years ago

    Deep learning is revolutionizing engineering, along with Exascale supercomputing.

  • @drskelebone
    @drskelebone 2 years ago +8

    Is Steve quiet for everyone? I've been in conferences all week, so I might be set up wrong, but I had to rewind twice to get clean vocals.

    • @jeroenritmeester73
      @jeroenritmeester73 2 years ago

      It's fine for me on mobile

    • @user255
      @user255 2 years ago

      I had to turn the volume up quite high, but now I'm hearing it just fine.

  • @Anujkumar-my1wi
    @Anujkumar-my1wi 2 years ago

    In Wikipedia, state variables are referred to as the variables that describe the mathematical state of the system, and the state as something that describes the system. But isn't the state the minimum set of variables that describes the system?
    Wikipedia article link: en.wikipedia.org/wiki/State_variable
    And also, I want to ask: is there any difference between the configuration of a system and the state of a system?

    • @vg5028
      @vg5028 2 years ago

      Yes, your understanding of state variables is correct. Sometimes it's useful to make a distinction between state variables and a "minimum set" of state variables. State variables are anything that gives you information about the state of the system -- it doesn't always have to be a minimal set.
      In my experience "configuration" and "state" are similar terms, but I could be wrong about that.

    • @Anujkumar-my1wi
      @Anujkumar-my1wi 2 years ago

      @@vg5028 Yes, but isn't the state referred to as the minimum set of variables that completely describes the system (that minimum set of variables being the state variables)? In Wikipedia, the state is referred to as something that describes the system, and state variables as something that describes the state of the system; but isn't the state here referred to as the minimum set of variables, i.e., the state variables?

    • @Anujkumar-my1wi
      @Anujkumar-my1wi 2 years ago

      @@vg5028 Well, my question is: why is the definition of state different in this article by MIT (web.mit.edu/2.14/www/Handouts/StateSpace.pdf)
      and in this Wikipedia article (en.wikipedia.org/wiki/State_variable)?

    • @hfkssadfrew
      @hfkssadfrew 2 years ago

      You asked a GREAT question. Think about this: you have a system with 2 state variables; one is always around 0.00001, and the other is around -1 to 1. So you will tend to believe this system is approximately 1D. But mathematically your understanding is 100% right: it has 2 degrees of freedom and no less. Still, you can think of it as 1D, which makes life a lot easier if you are in the business of modeling and control!

    • @Anujkumar-my1wi
      @Anujkumar-my1wi 2 years ago

      @@hfkssadfrew What I am asking is what 'state' is: is it referring to the condition of the system, or to the mathematical description of the system?

  • @yoavzack
    @yoavzack 2 years ago +1

    Imagine using this to represent a human brain in a low-dimensional space.

    • @__-op4qm
      @__-op4qm 2 years ago +1

      probably boils down to 2D ('amount of tasty pizza' x 'amount of tasty bacon') quite precisely. [If even one training example involves brain data in response to pineapple pizza, the gradient instantly explodes, coffee levitates onto keyboard and alien police come to remove pineapple away from pizza, just in time before a black hole forms turning milky-way into a Lorenz attractor.]

  • @toastyPredicament
    @toastyPredicament 2 years ago

    No this is good

  • @marku7z
    @marku7z 2 years ago

    How do I compute the x-dot in the case where the x are pixels?

    • @__-op4qm
      @__-op4qm 2 years ago

      Probably for each pixel separately, in 1D, by a simple numerical gradient dx/dt, because the joint underlying function over all pixels is unknown (the neural network needs to learn those correlations from examples).
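
      For instance, with evenly sampled frames (a small sketch; frames here is a random stand-in for a (time, ny, nx) video array):

      # Per-pixel time derivative of a video by central finite differences.
      import numpy as np

      frames = np.random.rand(500, 64, 64)     # stand-in video snapshots
      dt = 0.01                                # sampling interval
      x_dot = np.gradient(frames, dt, axis=0)  # d(pixel)/dt along the time axis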

  • @NozaOz
    @NozaOz 2 years ago

    Could someone help me? I'm a student fresh out of high school; I've got an Australian HSC education in chemistry, physics, and Extension 2 maths. I intend on studying physics at university and possibly getting a minor in CS to give me marketable skills. I'm currently just doing simple things like a Codecademy course on Python and likely the machine learning skill path. From where I am now, where do I go to understand this video?

    • @mohdnazarudin2636
      @mohdnazarudin2636 2 years ago

      To understand the video, coding is useless; it is not going to help.
      You need to understand linear algebra, dynamical systems or ODEs/PDEs, and also the math behind neural networks. Take courses in those subjects.

  • @JohnWasinger
    @JohnWasinger 2 years ago

    Singular Value Decomposition / Principal Component Analysis / Proper Orthogonal Decomposition
    (field? / field? / field?)

    • @zeydabadi
      @zeydabadi 2 years ago

      Am I right that he implied that all three of those are the same?

    • @JohnWasinger
      @JohnWasinger 2 years ago

      @@zeydabadi You're right, they are. I was wondering if certain fields prefer one term over another.

  • @MrHardgabi
    @MrHardgabi 1 year ago

    Wow, cool but complex; not sure if it could be simplified a bit.

  • @huyvuquang2041
    @huyvuquang2041 1 year ago

    Does anybody have the same feeling as me? Learning math and science with Harrison Wells?

  • @user-gj6cw6yc8s
    @user-gj6cw6yc8s 2 months ago

    😊 I don't know if computers are capable of deep learning
    Like I just explained our type of learning
    It don't come from all your
    Function boards
    The details that you place in it are your details
    I can't live your life my friend
    And your computer will never know what I'm trying to say
    Unless we were being straight but you don't have a straight life
    I doubt you make a completely straight computer
    ...😊 It's personal
    To understand your construction modeling
    You see the thing about my life it is not orchestrated by your construction modeling
    😊 Even if I had my own chance
    ...
    Sometimes the facts ain't even facts... if it ain't even there
    What could be what won't be
    That's really not your prediction
    😊 Unless it's within your case to understand
    😮 Most people don't have these matters and they only predict
    😊 Try to be the cause and effect of them
    Before you predict in the middle of them
    .... Even if predictions are such outcasts
    😊 Even the teacher's pet taught us that
    ... I won't even use the word persuasions
    ..... You see a computer has to modify itself to each and every case of individual and the life and standards that they have to live by
    To understand them
    You will never help them
    By a parents point of view
    You got to take the strong considerations of their wrongs
    ....
    Their point of views
    Were there aiming what they can what they can't
    I don't need a computer that says well I can't do that I won't learn that
    😊 That's what my professor at MIT told me
    If I can't do that I won't work on that
    😊 I said okay you will give me a computer just the same
    .....
    😊 Logically I am correct
    But like I said that's a prediction
    I am careful about my predictions
    Because what is important to you is the same that is important to me it's just not important to you to give it to me as much as it was important to just keep it to yourself
    😊 I'm a man of discoveries and I can't help but run my mouth
    😮 But you're a man with a job and you got nothing else to learn
    ....😊 We did meet in the middle
    😮 I can't help it you're going the other way
    😊 Maybe I'm stupid
    Look we met back in the middle
    😢 Call it even damn it

  • @--JYM-Rescuing-SS-Minnow
    @--JYM-Rescuing-SS-Minnow 2 years ago

    wow! this is so fun! I think I made it 2 somewhere, in this switchboard of bowties! I don't know whether 2 call this ''at&t,how can I help U"! or. land of confusion, in deep thought flow's? ha..ha.. yes, my attempt @ humor! thanks so much 4 the lesson! totally love this! good luck!

  • @Tyler-pj3tg
    @Tyler-pj3tg 1 year ago

    AI to learn how many black shirts Steve Brunton has

  • @user-gj6cw6yc8s
    @user-gj6cw6yc8s 2 months ago

    You got to be worried about the wrong point of view you feed a computer
    😊 As a human we don't make the mistakes
    😊 We necessarily know or know what we need or what is needed to be added
    ....😊 Sometimes no potential strains there
    😊 Sometimes we don't have such qualifications as a qualification
    😮 Even if you are not qualified a human will work you into qualified
    Leave it up to a computer
    😊 You won't be qualified for s***

  • @ArbaouiBillel
    @ArbaouiBillel 2 years ago

    AI has gone through a number of AI winters because people claimed things they couldn't deliver

  • @tag_of_frank
    @tag_of_frank 2 years ago +2

    The first 9 minutes can be summarized with this sentence: "There exists a neural network which can perform SVD."

    • @hfkssadfrew
      @hfkssadfrew 2 years ago

      Lol. You can say “there exists a polynomial which can approximately perform any operation”. If you think so, then you still don’t get the point.

    • @tag_of_frank
      @tag_of_frank 2 years ago

      @@hfkssadfrew I think the point is after minute 9.

  • @laxibkamdi1687
    @laxibkamdi1687 2 years ago

    Sounds really hard.

  • @user-gj6cw6yc8s
    @user-gj6cw6yc8s 2 months ago

    😊 next thing you know we got crooked computers
    😊 Last time I checked there's not a f****** game on this computer that the game does not f****** cheat or can it play f****** digitally Fair
    😊 Ever since they made one f****** computer program
    You can never trust a f****** poker cards ever again
    😊 I don't want to play with your computer
    😊 For one it does not know how to f****** shuffle
    😊 And for two it don't know how to stop looking at my f****** cards

  • @nerdomania24
    @nerdomania24 2 years ago

    inventing my own math, from ground up and have no problem with physical systems and AI, you just have to make metrics emergent from a sack of infinite amount of Differential forms and just pick one until the metric of selfmanifistation won't be statistically correlated.

  • @gtsmeg3474
    @gtsmeg3474 2 years ago

    audio is sooo low WTF