Thanks Steve, I have been using your videos as background "reading" for my graduate students. We are in civil engineering, so ML is not our core discipline. I have been wanting to teach them the benefits of physics-informed ML for a while now (for CFD-based work in indoor air quality), and this video will be great for them, thanks!
This video is not just a good source on machine learning, but also one on the history and philosophy of science. Nice work.
This is now one of my favorite videos.
You are by far the best at explaining problems that were believed to be very complex; after seeing your last 15 videos, they have become very understandable, not because they are easy but because you're a great professor. Very good job, and regards from Spain.
Yes... one of the best videos I've ever seen. Simple and highly sophisticated at the same time...
Mr. Steve Brunton, you have the best teaching system I have ever seen; keep up the great work!
@12:00 it's not that there are "different forces" in the Galileo experiments; it's that the same forces have different relative magnitudes for the different objects.
That was a wonderful depiction of a difficult problem in today's data-rich learning environment. Thank you for this, and I'll be thinking about the topic for weeks to come!
Great presentation!
One of the best videos of this series was saved for last. It casts a different, much-needed light on the data-driven approach to dynamic modeling. I appreciated the choice one must make between the precise predictions provided by ML and the scientific insight, although less precise, provided by first principles.
The astronomical examples were well chosen, as were the dropping-ball experiments. Just one small correction: Copernicus is the father of the heliocentric model, not Kepler.
This also goes to the issue of "interpretability" of machine learning models. The idealized version is more "interpretable" (and more compact). But discrepancy modeling may be necessary for describing all the incidental factors, especially when dealing with real engineering scenarios. Having a good "backbone" may be a good place to start in combining the best of both worlds.
I love watching the double pendulum at the Exploratorium
Very good lesson, thank you Professor! I have learned this part of the course, and I will review the papers you mentioned again. I hope I can produce beautiful results in my field!
That's a really good point about knowing what kind of model you want. Really makes me reconsider some ideas I've had about projects to tackle.
Big thanks to you Steve! I'm currently pursuing my master's thesis in the field of measurement technology (analyzing optical sensor networks) and found out about SVD from your videos. Big help! I really enjoy your lectures and just bought your book.
Thank you Steve. Would you make a video about the methods for calculating those models?
Thank you for the insight and the lecture. Would it be possible for the next lecture to explain in detail how you have done this in practice? It could be just one of your papers, but the details will ultimately help us learn and subsequently grow.
Thanks
You're awesome! I wanna watch all your videos
Greetings from Brazil
Hi Steve, I'm looking to use this approach to model the orbital decay of low-Earth-orbit vehicles. Thanks!
Hi, kindly adjust the encoded volume for this video. It's quite low, and I had to turn my device's volume all the way up (I almost spilled my coffee when one of the ads showed up).
Amazing videos! I really love the historical background you have shown. I had never before seen the Copernicus model as a Fourier transform (which is becoming mainstream thanks to videos explaining the possibility of square orbits in planetary/moon systems), nor the Galileo experiments with ramps and bells to measure time in an age when clocks were rudimentary. A teacher of mine once said that physicists used to be musicians, given their natural ability to keep track of time; without accurate measurement of time there is no kinematics.
Is there an intuitive argument as to why it is easier for machine learning techniques to learn the discrepancy rather than the "true model" directly? Thanks :)
Thank you for the insightful video. I was expecting one on PDEs though, as a continuation of past weeks. Will those lectures resume?
Additionally, I have been following your channel for a few years now (since I came across your control bootcamp series). Thank you again for the great content.
It was an ode. 😉
The stuff he's researching is of absolute importance.
Naive question here: if we are looking at output data, is there a variance from our expected data that can point to unmeasured variables in the system we are measuring? Can we reliably differentiate between statistical noise and something that requires a deeper understanding of the 'physics'?
Great video. Isn't a solution to use SINDy and then zero out the coefficients smaller than some threshold value?
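For what it's worth, that "zero out the small coefficients" idea is essentially the sequentially thresholded least squares at the heart of SINDy. A minimal sketch, with a toy candidate library and invented data (the threshold and iteration count are arbitrary choices):

```python
import numpy as np

def stlsq(Theta, dXdt, threshold=0.5, n_iter=10):
    """Sequentially thresholded least squares: repeatedly fit, then
    zero out any coefficient whose magnitude falls below `threshold`."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):        # refit each state on its active terms
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k], rcond=None)[0]
    return Xi

# toy example: recover dx/dt = -2x from noisy samples
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=(200, 1))
dxdt = -2.0 * x + 0.01 * rng.standard_normal((200, 1))
Theta = np.hstack([np.ones_like(x), x, x**2])  # candidate library [1, x, x^2]
Xi = stlsq(Theta, dxdt)
```

The thresholding drives the constant and quadratic coefficients to exactly zero and refits the surviving term, so `Xi` ends up sparse with the coefficient on x near -2.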
To be fair, we need to know what Steve's car is to judge the new rig.
:D
Thank you so much for your great lecture!
Hi, my university uses your videos as a reference, and I have to say they're really well explained. Thanks!
imagine paying $100,000 for YouTube videos
@@theastuteangler He may/may not be US based. Note the use of word "University" and not "College".
You remind me of Harrison Wells from the Flash :)
Thank you for the lecture!
Love your videos Steve. I had to give a seminar on unsupervised ML, and I gave your data science book as one of my references ^_^
What is a good resource to read about group sparsity?
How do you shoot these videos? Standing behind a half-mirror onto which the visuals are projected from the front?
Another issue that I think will always leave discrepancies is the mathematical tools we currently use to model physics. Take ODEs as an example: they cannot simultaneously have uniqueness of solutions (which always holds in current models, at least the simpler ones used in classrooms) and solutions that become exactly zero after a finite extinction time (which is what everyone experiences in daily life, since at our scale things stop moving due to friction).
As an example of what I mean, consider the equation:
x' = -sgn(x)*sqrt(|x|); x(0)=1
one solution of this ODE is
x(t) = (1/4)*(1 - t/2 + |1 - t/2|)^2
which becomes exactly zero after the time t=2.
Solutions of this kind cannot arise from an ODE whose right-hand side is locally Lipschitz everywhere; there must be at least one point where it is non-Lipschitz. That rules out every linear ODE, which, conversely, always has solutions that persist for all time.
As a practical example, consider the nonlinear pendulum with friction, where the traditional guesses for the drag force are the drag equation:
F_d(v) = a_1*v^2
or Stokes' law:
F_d(v) = a_2*v
where "v" is the speed and the "a_i" are constants.
If instead I use a sublinear damping term like:
F_d = a_3*sgn(v)*sqrt(|v|)*(sqrt(2)/4+|v|^(3/2))
then the solution will indeed reach a finite extinction time, and in this case the force resembles both the quadratic law and Stokes' law at low speeds (and for longer than a simple quadratic term would).
Turbulent regimes are out of scope for this ansatz, but for classic examples it gave a good fit.
I hope you can try it in your lab too.
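As a sanity check, the finite extinction time claimed above can be verified numerically. A minimal sketch using plain forward Euler (step size and integration horizon are arbitrary choices):

```python
import math

def rhs(x):
    # x' = -sgn(x) * sqrt(|x|)
    return -math.copysign(math.sqrt(abs(x)), x)

# forward Euler from x(0) = 1
dt, t, x = 1e-5, 0.0, 1.0
while t < 3.0:
    x += dt * rhs(x)
    t += dt

# closed-form solution quoted in the comment above
def exact(t):
    u = 1.0 - t / 2.0
    return 0.25 * (u + abs(u)) ** 2

print(abs(x), exact(3.0))  # both effectively zero: extinction happens by t = 2
```

Past t = 2 the numerical trajectory just jitters around zero at the scale of the step size, while a linear ODE like x' = -x would still be at exp(-3) ≈ 0.05 at t = 3.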
Excellent!
Very beautiful...
wonderful
I have a question about how to model the discrepancies in the artificial intelligence model. The incorporation of the theoretical model is to try to speed things up, right? My intuition tells me that if the focus is accuracy, it really doesn't matter whether or not you add a prior model: if the prior doesn't accurately match the training-set examples, the forecasting model will just subtract that initial guess from whatever model the AI finds. Does this make sense? Other variables seem more important for accuracy: how many neurons you have and in how many layers, how representative and unbiased your training set is, whether your neural network has feedback, or whether it is a set of neural networks competing with each other, among other technical characteristics of your AI system, all more determinant for accuracy than having a good theoretical framework. But I don't know if this is true; I hope you can comment.
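One way to probe that intuition: with enough capacity and data, the learner can indeed absorb the prior, but with a limited learner the physics backbone matters, because the residual is smaller and simpler than the full signal. A toy sketch (all functions and constants invented for illustration; the "ML model" here is just a fitted line):

```python
import numpy as np

rng = np.random.default_rng(1)

def physics(x):           # known dominant physics (hypothetical idealized model)
    return x**3

def truth(x):             # reality: physics plus a small linear discrepancy
    return physics(x) + 0.1 * x

# sparse, noisy training data on [-1, 1]
x_train = np.linspace(-1.0, 1.0, 20)
y_train = truth(x_train) + 0.01 * rng.standard_normal(20)

# hybrid: low-capacity learner fits ONLY the residual y - physics(x)
hybrid = np.polyfit(x_train, y_train - physics(x_train), 1)
# black-box: the same low-capacity learner fits everything
direct = np.polyfit(x_train, y_train, 1)

# extrapolate outside the training interval
x_test = 2.0
err_hybrid = abs(physics(x_test) + np.polyval(hybrid, x_test) - truth(x_test))
err_direct = abs(np.polyval(direct, x_test) - truth(x_test))
```

The line fitted to the residual captures the discrepancy almost exactly and extrapolates well on top of the backbone, while the line fitted directly to the cubic data misses badly at x = 2. So the prior is not just a speed-up; it changes what the learner has to represent.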
As an MMT macroeconomist, I can tell you this works in macroeconomic policy too. The discrepancy is a lot like a buffer stock or automatic stabiliser. I can show that central banks do not have the tools to stabilize the price level, but the fiscal authorities (parliaments) do; they just do not know it, and even if they do, they do not want to admit it, because it suits neoliberal political insiders to keep workers and small firms in debt and shift the blame onto the feckless central bankers, who can't do a thing about it. Unlike a double pendulum or an applied CFD model, though, this ignorance costs people's lives and small businesses.
Can you point me to the models or a way to get more information about this?
Greetings. I suggest you add a line like "I'm foo and I do bar" to the YouTube channel's About page, since that text pops up when your videos page is linked, e.g. on Discord. Best, Nikolaj.
love it
Steve, your videos are in my blood, and I always want to hear about research areas such as geoinformatics and hydraulic engineering!!
If you gave me the opportunity in your research environment, I would give my best to become a person like you!!
What about human-scale bias effects on the model? How significant is this variation, and at what scale does it matter? E.g., "at 20 m such things happen" is about stuff falling from houses, but does it work for airplanes? How do you really model something falling from a great height, or a very large object? At those scales, measuring starts to introduce more errors, so does the model need to be more of an average?
[Leibniz's contingency argument for God, clarified]:
Ten whole, rational numbers 0-9 and their geometric counterparts 0D-9D.
0 and its geometric counterpart 0D are:
1) whole
2) rational
3) not-natural (not-physical)
4) necessary
1-9 and their geometric counterparts 1D-9D are:
1) whole
2) rational
3) natural (physical)
4) contingent
Newton says since 0 and 0D are
"not-natural" ✅
then they are also
"not-necessary" 🚫.
Newton also says since 1-9 and 1D-9D are "natural" ✅
then they are also
"necessary" 🚫.
This is called "conflating" and is repeated throughout Newton's Calculus/Physics/Geometry/Logic.
con·flate
verb
combine (two or more texts, ideas, etc.) into one.
Leibniz does not make these fundamental mistakes.
Leibniz's "Monadology" 📚 is zero and its geometric counterpart, zero-dimensional space.
0D Monad (SNF)
1D Line (WNF)
2D Plane (EMF)
3D Volume (GF)
We should all be learning Leibniz's Calculus/Physics/Geometry/Logic.
Fibonacci sequence starts with 0 for a reason. The Fibonacci triangle is 0, 1, 2 (Not 1, 2, 3).
Newton's 1D-4D "natural ✅ =
necessary 🚫" universe is a contradiction.
Natural does not mean necessary. Similar, yet different.
Not-natural just means no spatial extension; zero size; exact location only. Necessary.
Newtonian nonsense will never provide a Theory of Everything.
Leibniz's Law of Sufficient Reason should be required reading 📚......
Also I would like to add a few comments and questions with the hope you will read them:
About the free-fall experiments: when testing theoretical laws against accurate measurements and their discrepancies, the video somehow gives the sense that the theory is mistaken, and here I think you could extend it with an experiment. Since more forces act on the objects than gravity (air drag, in this experiment), there are indeed other effects due to this additional force, as you have beautifully shown; but this doesn't mean that the free-fall laws are wrong. If you isolate the effects of air friction, you will find a perfect fit of the free-fall laws to your data. This is typically shown in physics courses with a video of the free fall of a hammer and a feather on the Moon, performed by an astronaut, or a video of a feather falling in a vacuum tube, both of which seem too artificial for the intuition. But pick up the widest book in your office right now, with a feather in your other hand, and drop them at the same time: the book will land first, as our intuition tells us from previous experience. Now do the experiment again, placing the feather alone on the book's cover with the cover pointing upwards; drop the book, and you will see the feather falling as fast as the book, never detaching from it! It's like a magic trick for the intuition. The feather isn't held there by static or suction (both measurable, since the feather never stays stuck afterwards, nor flattens during the fall); it simply follows the free-fall law, because the air resistance over the feather has been removed. This shows intuitively why these kinds of theoretical laws are true, why they are used in physics, and, since they are true, why they serve as the basis for more complicated models that account for the other forces participating in the system once we remove some constraints (like taking the air-drag effects into account).
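The book-and-feather point can even be sketched numerically: the same free-fall law plus a drag term whose relative size depends on the object. A toy model with invented drag-to-mass constants, integrated with forward Euler:

```python
def fall_time(drag_over_mass, h=2.0, g=9.81, dt=1e-4):
    """Time to fall height h under z'' = -g + (c/m) * v^2.
    The drag term as written is valid while v <= 0 (falling),
    where it opposes the downward motion."""
    z, v, t = h, 0.0, 0.0
    while z > 0.0:
        a = -g + drag_over_mass * v * v
        v += a * dt
        z += v * dt
        t += dt
    return t

t_vacuum  = fall_time(0.0)    # ideal free fall, ~sqrt(2h/g)
t_book    = fall_time(0.05)   # heavy object: drag barely changes the fall time
t_feather = fall_time(5.0)    # light object: drag dominates, falls much slower
```

With these made-up ratios, the "book" lands within hundredths of a second of the vacuum time while the "feather" takes more than twice as long, matching the point above: the free-fall law is right, and the drag term is the discrepancy.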
I hope you will add this experiment to the video... it is remarkably easy and effective, and somehow it is never used in classrooms.
cool
Persian translation