Steve, a question: for a control problem, wouldn't we want an inverse operator -- one that maps the desired output to the control u(t)? Can the paper's approach be adapted for that?
Experimentally, I've found that stacking all inputs into a single vector and using a vanilla feedforward network is just as good as the DeepONet (at least for simple problems).
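To make the two alternatives concrete, here is a minimal, untrained sketch in plain NumPy (random weights, purely illustrative; the layer sizes and sensor count are assumptions, not anything from the video): a DeepONet combines a branch net on the sampled input function u with a trunk net on the query location y via a dot product, while the "vanilla" alternative just concatenates u and y and runs one feedforward net.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, Ws, bs):
    # Simple feedforward net: tanh hidden layers, linear output layer.
    for W, b in zip(Ws[:-1], bs[:-1]):
        x = np.tanh(x @ W + b)
    return x @ Ws[-1] + bs[-1]

def init(sizes):
    # Random (untrained) weights, scaled by 1/sqrt(fan-in).
    Ws = [rng.normal(size=(m, n)) / np.sqrt(m) for m, n in zip(sizes[:-1], sizes[1:])]
    bs = [np.zeros(n) for n in sizes[1:]]
    return Ws, bs

m, p = 20, 10                          # m sensor points for u, p latent modes
u = np.sin(np.linspace(0, np.pi, m))   # input function sampled at m sensors
y = np.array([0.3])                    # query location

# DeepONet: branch encodes u, trunk encodes y; output is their dot product,
# G(u)(y) ~ sum_k b_k(u) * t_k(y).
bW, bb = init([m, 32, p])   # branch net
tW, tb = init([1, 32, p])   # trunk net
G_deeponet = mlp(u, bW, bb) @ mlp(y, tW, tb)

# "Vanilla" alternative: stack [u; y] into one vector, one feedforward net.
vW, vb = init([m + 1, 32, 1])
G_vanilla = mlp(np.concatenate([u, y]), vW, vb)[0]

print(G_deeponet, G_vanilla)  # two scalar predictions (untrained, so arbitrary)
```

The structural difference is that the DeepONet factorizes the output into input-function features times query-location features, which is what lets it evaluate G(u) at arbitrary y without retraining; the stacked version bakes y into the input vector instead.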
Apologies for the quibble, but could you post a link to the reference? It seems not quite right: these authors are prolific, so searching on their names returns many papers, and JCP 378 (which is 2019) doesn't contain any papers by them.
Hi Steve, your lessons are excellent, thank you for your help! I was wondering when the set of videos on PINNs would be released since you mention them a lot in some of the videos on Loss Functions, for example.
Very interesting. It looks like this could work well in control theory, and I wonder whether it is more generalisable than state-based models in control. It could also be interesting to further split u(t) into its own net as well.
I am very curious how this compares to reinforcement learning in arriving at optimal control, even for relatively simple scenarios such as a thermostat.
The DDSE video series was so good; it explained the code for everything. I'd really love it if these videos came with the implementation and training code.
Vivek here - absolutely loved the clear and simple explanations in this video! Keep them coming!
I think there is a small error - the paper was introduced in 2019, not 2023
Hey, great explanation!
Which paper are you talking about at 12:20 that proved the irrepresentability of chaotic systems?
Awesome! Where can I find a simple sample implementation to build upon?
+ 1 on this
Clear videos, professor! A big fan of your lectures from India.
So essentially we are trying to learn the inverse differential operator?
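One way to see this: for the simple ODE s'(t) = u(t) with s(0) = 0, the solution operator G: u ↦ s is the antiderivative, i.e. the inverse of d/dt. This is the toy problem used as an example in the DeepONet paper, and here is a hedged sketch (the random-Fourier input and trapezoid integration are my own assumptions for illustration) of how one training triple (branch input, trunk input, target) for that operator could be generated:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 100)

def random_u(t):
    # Random smooth input function: a few cosine modes with random coefficients.
    k = np.arange(1, 4)
    a = rng.normal(size=3)
    return (a[:, None] * np.cos(np.pi * k[:, None] * t)).sum(axis=0)

u = random_u(t)

# Target: s(t) = integral of u from 0 to t, computed with the trapezoid rule,
# so G: u -> s inverts d/dt (with s(0) = 0 fixing the constant of integration).
s = np.concatenate([[0.0], np.cumsum(0.5 * (u[1:] + u[:-1]) * np.diff(t))])

# One supervised sample: u at the sensors (branch input), a random query
# time (trunk input), and the solution value there (target).
y_idx = rng.integers(len(t))
triple = (u, t[y_idx], s[y_idx])
```

Strictly speaking, the network learns the solution operator of the whole problem (equation plus initial/boundary data), which only coincides with a literal inverse differential operator in simple cases like this one.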
Very interesting 🎉🎉 One of your followers from Pakistan. You are my most favorite teacher ❤
Very interesting 😊
Is it possible to get a copy of the slides? The figures are so beautiful.
Where can I find the code for this?
GLU?