I have a potentially stupid question, but I'm just a dropout who's learning on my own and trying to make it, lol. Would it be reasonable to think of a normalizing flow model as similar to a stochastic optimal control problem, e.g. one using the Hamilton-Jacobi-Bellman equation? Say, for defining a dynamic strategy in a financial market: we define the strategy's behavior relative to the distributions the market creates as it plays out, and we learn the optimal policy such that as the market's distribution evolves, our execution evolves with it, so that the end result of our execution resembles the p(z) we defined as desirable, and we got there by transforming the market's distribution p(x). Does that sound anywhere close to the ballpark? Or would it actually be more like setting the model up as a normalizing flow and using a Hamilton-Jacobi-Bellman setup to optimize and train it?
So I went back to the beginning and I'm gonna try this again. Here's the basic idea: we have some desired distribution that we want to use, and that is p(z). Then, running our data through the math, we can find the CDFs needed to transform the data, based on our chosen distribution, into some other distribution p(x). That does two things: (1) it gives us a more "true" distribution for the actual process we're trying to model, and (2) it gives us a translated way to interpret that distribution, since it's related to the p(z) we defined through the CDF transformations. But now I have another rabbit hole to go down, because it seems like another route to a copula or something, lol.
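A minimal sketch of the CDF idea described above (the probability integral transform): push the data through its own empirical CDF to get uniforms, then through the inverse CDF of the chosen target p(z). The lognormal data and the standard-normal target here are just placeholder assumptions, not anything from the video.

```python
import numpy as np
from scipy import stats

# Placeholder data: samples from some unknown process p(x).
rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)

# Empirical CDF of the data maps each x to u in (0, 1).
u = stats.rankdata(x) / (len(x) + 1)

# Inverse CDF (ppf) of the chosen target p(z), here a standard
# normal, maps u to z, so z is approximately distributed as p(z).
z = stats.norm.ppf(u)
```

Composing "data CDF, then target inverse CDF" is exactly a one-dimensional flow; a copula is the multivariate version where each margin is handled this way first.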
Awesome video. The example you showed seems to match very closely with the types of problems that you would solve with a KDE. I think a lot of what people use normalizing flows for right now is learning an invertible mapping from a source distribution to a sink distribution. Could you shed some light on how you could use this normalizing flow model you built to generate new samples of data?
Wow, what an explanation and visualization... simply outstanding!
I have to believe that a super complex version of this in 8 dimensions using spinor fields is what James Simons figured out way way back
Amazing man! Awesome!
Thank you, Volt, for this class.
But how do you invert the neural network? I.e., how do you get x back from an output y when the map is a composition of multiple CDFs?
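One answer, under the assumption that each layer is a strictly increasing scalar map (which CDFs are): the composition is also strictly increasing, so even without a closed-form inverse you can invert it numerically, e.g. by bisection. The two-layer "flow" below is a made-up toy, not the video's model.

```python
import numpy as np

def invert_monotone(f, y, lo=-50.0, hi=50.0, iters=100):
    """Invert a strictly increasing scalar map f by bisection:
    find x with f(x) = y, assuming the answer lies in [lo, hi]."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy "flow": a composition of two monotone, CDF-like layers.
def flow(x):
    h = 1.0 / (1.0 + np.exp(-x))   # logistic CDF layer
    return np.tanh(2.0 * h - 1.0)  # second monotone layer

y = flow(1.3)
x_rec = invert_monotone(flow, y)   # recovers x = 1.3
```

In practice, flow architectures are designed so each layer is invertible cheaply (affine couplings, monotone splines), so you rarely need a generic root-finder, but the monotonicity is what makes inversion possible at all.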
What a wonderful video!
2:20 Where is the uniform distribution?
This guy is a genius.
Great video, thanks a lot!
Do you recommend PyTorch for probabilistic programming?
I tried Pyro a long time ago and found it pretty easy to use. I'm also starting to like JAX more and more recently.
TensorFlow Probability is pretty dope too, tbh.
That is a great tutorial. Do you have the code uploaded anywhere?
I also wonder how to generate samples x from the uniform z.
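In the simplest 1-D case, where the forward map z = F(x) is the data's CDF, generating samples is just inverse transform sampling: draw z uniformly and apply the inverse CDF. The exponential distribution below is only a stand-in for whatever F the model actually learned.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in assumption: the learned forward map z = F(x) is the CDF
# of an Exp(1) distribution. Then sampling p(x) is:
#   1) draw z ~ Uniform(0, 1)
#   2) apply the inverse CDF, x = F^{-1}(z) = -log(1 - z)
z = rng.uniform(size=10_000)
x = -np.log1p(-z)
```

The same recipe carries over to learned flows: sample from the base distribution, then run the inverse of the trained transformation.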
Awesome video. The example you showed seems to match very closely with the types of problems that you would solve with a KDE. I think a lot of what people use normalizing flows for right now is learning an invertible mapping from a source distribution to a sink distribution. Could you shed some light on how you could use this normalizing flow model you built to generate new samples of data?
Well done.
Thank you
Excellent!
Classic
Good description, but the background sound is really distracting and annoying.
Please read out the equations and explain what they mean in plain English. And remove the distracting music.