At 18:53, the eigenvector is defined as \phi(x), a function of x, which I do not understand. Given that you have found the Koopman operator in a previous step, wouldn't the eigenvectors \phi be fixed, since the matrix K is fixed? Then the only variables dependent on x would be the Koopman modes v_k, and these modes would determine "how much" of every eigenvector is present for a given x?
The problem with this lecture is that there is insufficient information, and the methodology does not match the problem that needs to be addressed before asking any questions; otherwise it would be premature, like betting on who will win a marathon without evaluating the data on each runner. What we need is information that can be applied to the problems we face in our own situations, if we are to make progress toward whatever goals we are working to achieve. Overall: 5/10.
@Mrbheijden The eigenfunctions \phi(x) are functions on the state space, like all observables. They're special observables that evolve like e^{\lambda t} in time (for complex \lambda) along a dynamical trajectory, whereas the Koopman operator evolves any observable in time along those dynamics. The eigenfunctions and their eigenvalues depend on K, yes, and are inherent to the dynamics, but that doesn't make them independent of x. They are strongly dependent on x: in order to push the nonlinearity out of the time dynamics (so you get e^{\lambda t} behavior), you need to push the nonlinearity into the functional dependence on x. The modes v_k, in turn, depend on the observable you're working with (they're the projection of an observable onto some eigenfunction). They're "how much" of every eigenfunction is present _in a given observable_, not at a certain x. It's a strange shift in perspective: we're no longer working on the state space; we "lift" to a function space, the space of observables, and the Koopman operator acts _on that function space_, not directly on the state space.

@@kentheengineer592 Nathan Kutz is an expert in this, and here he's applying Koopman theory in a new area (PDEs) that hasn't been explored much yet. This is significant work. Koopman theory in general is unintuitive and strange, but it's very powerful and worth learning: if you have data for a dynamical system, it can tell you significant things about the system's underlying structure. Give the lecture a few more watches and read some of his papers.
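To make the x-dependence concrete, here is a minimal sketch (not from the lecture; the system is the standard worked example from the Koopman literature, and the parameter values are illustrative choices). An eigenvector w of a finite matrix K is indeed a fixed vector, but the eigenfunction it defines, \phi(x) = w^T g(x) for a dictionary of observables g(x), is still a function of x. Along a trajectory x(t), each \phi evolves linearly, as e^{\lambda t} \phi(x(0)), even though the underlying dynamics are nonlinear:

```python
import numpy as np

# Standard Koopman example: dx1/dt = mu*x1, dx2/dt = lam*(x2 - x1^2).
# The nonlinearity x1^2 is "pushed into" the eigenfunctions' x-dependence.
mu, lam = -0.05, -1.0

def f(x):
    return np.array([mu * x[0], lam * (x[1] - x[0] ** 2)])

def rk4_step(x, dt):
    # One classical Runge-Kutta step for the nonlinear dynamics.
    k1 = f(x)
    k2 = f(x + dt / 2 * k1)
    k3 = f(x + dt / 2 * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Three Koopman eigenfunctions phi(x): each is a genuine function of x,
# yet along a trajectory evolves as exp(eigenvalue * t).
b = lam / (lam - 2 * mu)
eigpairs = [
    (lambda x: x[0],                mu),       # phi1 = x1,   eigenvalue mu
    (lambda x: x[0] ** 2,           2 * mu),   # phi2 = x1^2, eigenvalue 2*mu
    (lambda x: x[1] - b * x[0]**2,  lam),      # phi3,        eigenvalue lam
]

x0 = np.array([1.0, 0.5])
T, dt = 2.0, 1e-3
x = x0.copy()
for _ in range(int(T / dt)):
    x = rk4_step(x, dt)

for phi, ev in eigpairs:
    # Nonlinear trajectory vs. purely linear evolution in eigenfunction space:
    print(phi(x), np.exp(ev * T) * phi(x0))   # agree to integrator accuracy
```

The point of the sketch: the eigenvalues (mu, 2*mu, lam) are fixed by the dynamics, but evaluating \phi at different states x gives different numbers, which is exactly the sense in which the eigenfunctions "depend on x".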
nice!