Most clear explanation I have ever seen.Thank you so much.
Glad it was helpful!
Best explanation, I have gone through yet. In this little time, you explained such complex thing clearly. Thank you very much.
Great. I'm glad you found it helpful.
Boy I wish you would have been my instructor for this class. I would have actually learned something.
I guess it's never too late. 😁 I'm glad you've found the content now, and that you like it.
I've never found a channel so helpful on youtube like yours thank you so much you are one of the very few people on internet helping me on my engineering journey !
That's great to hear. I'm glad the videos are helpful.
Thank you sir. You cured my cancer. It is the best explanation I have ever seen.
I'm so glad it was helpful. It's a difficult concept, and it sounds like you've been finding it challenging. It's great to hear when my videos have helped.
When I took this class at UCSD in 2001 my instructor was a visiting prof who hardly spoke English (not necessarily a bad thing unless all your students speak it as their first language), and he chose a textbook with thousands of notation errors in it. Needless to say I didn't learn much. This makes much more sense. Thank you!
You've pretty much described my own undergraduate experience of this subject, back in 1990. It's one of the main motivators for me to make these videos! I'm glad you found this one helpful.
That's actually a very bad thing for students. I shut down when I have teachers or professors who speak with a thick accent or in broken English. I am trilingual, and still my mind shuts down when I don't understand the language spoken in its common forms.
Wonderful explanation, everything's so clear, I guess I'll benefit a lot from that for my statistics test coming up on Monday!
I'm glad you found it helpful. Good luck for your test. If you haven't seen it already, you'll find other related videos listed in categories at iaincollings.com
SIR, YOU SHOULD GET A NOBEL PRIZE FOR SHARING YOUR LIFE LEARNINGS IN THE MOST INFORMATIVE WAY (CONTENT) !
That'd be very nice! 😁
Thank you, Professor. The information was useful. Following you from Algeria🇩🇿🦋
Glad it was helpful! It's great to know I'm able to help people around the world.
Excellent explanation, a kiss from me, young man!
truly a clear and good explanation with examples...thank you, Prof.Iain
Glad it was helpful!
Nice and neat explanation, thank you Sir
Glad you liked it
Thank you very much! I would appreciate so much if more content on this topic follows as video sources on this subject are quite limited!
I'm glad you liked the video. Do you have suggestions of other topics? Have you seen my webpage? It lists all the videos I have, in categories: iaincollings.com
I'm very impressed by your clear and kind explanation. I plan to recommend it to my lab colleagues ^^
Awesome, thank you!
What an amazing presentation!
Thanks. I'm glad you liked it!
Brilliant explanation
Glad you liked it
Beautiful explanation about (stochastic) random processes, thank you very much!
Some questions regarding stationary and non-stationary:
1. You say that a process is stationary if all random variables have the same PDF; does that mean they are from the same family AND have the same parameters?
2. What would a process that is not stationary be? Each random variable has the same PDF family with different parameters, AND/OR each variable can have a PDF from a different family (e.g. one a Gaussian and another a Rayleigh)?
Last question (I promise!):
A Gaussian process is when the PDF is a Gaussian, right?
Some answers:
1. This video will help: "What is a Stationary Random Process?" ua-cam.com/video/elV4B_-79oo/v-deo.html
2. Yes, to both of your suggested examples. A general example is any system that changes parameters over time (for example, the maximum daily temperature - it's random, but the PDF changes continually throughout the year).
3. Yes.
@@iain_explains
Perfect, thanks a lot!!
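The non-stationary "maximum daily temperature" example from answer 2 can be sketched in a few lines of Python. The seasonal model and all the numbers below are my own illustrative assumptions, not from the video:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: daily maximum temperature with a mean that drifts
# through the year. X(t) is Gaussian at every day t, but the parameters
# change with t, so the process is NOT stationary.
def daily_max_temp(day, n_samples=100_000):
    mean = 22 + 8 * np.sin(2 * np.pi * day / 365)  # seasonal mean, deg C
    return rng.normal(loc=mean, scale=3.0, size=n_samples)

early_year = daily_max_temp(day=30)
mid_year = daily_max_temp(day=210)

# Same Gaussian family at both times, but different parameters:
print(early_year.mean(), mid_year.mean())
```

The two sample means land roughly 7–8 degrees apart, which is exactly the point: the marginal PDF of the process depends on which day you look at.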
Excellent explanation.
Glad it was helpful!
Beautiful! Thanks.
Glad you like it!
Great job as always !
Thanks again!
So an ergodic process is the same as taking a sample or simulating the distribution?
Hopefully this helps: "What does Ergodic mean for Random Processes?" ua-cam.com/video/GchHaeVludo/v-deo.html
Thank you Iain. By far the clearest explanation I've seen 😀
Beginner question: you have explained that when the PDFs of various sample functions at different times are equal, then there is stationarity.
But what is it called when all sample functions have the same PDF?
Just to be a bit more precise, a sample function doesn't have a pdf. Sometimes people do say that, but it's not an accurate statement. A random variable has a pdf. A sample function can be used to generate a "normalised histogram" over time - and that can be compared to the pdf of the random variables, as I explain starting at the 5:40 min point in the video. If they are the same, then it is called "ergodic".
@@iain_explains is the 'histogram' related to the ensemble average?
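The "normalised histogram over time" comparison described above can be sketched in Python, for a toy process where the comparison does come out equal. The i.i.d. Gaussian model here is just an illustrative stand-in for an ergodic process, not the process from the video:

```python
import numpy as np

rng = np.random.default_rng(1)

# One sample function of a simple ergodic process: i.i.d. N(0,1) noise
# observed over a long stretch of time.
realisation = rng.normal(size=200_000)

# "Normalised histogram" over time, compared against the Gaussian pdf
# of the random variables.
counts, edges = np.histogram(realisation, bins=60, density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
pdf = np.exp(-centres**2 / 2) / np.sqrt(2 * np.pi)

# For an ergodic process, the time histogram converges to the ensemble pdf
# as the realisation gets longer.
print(np.max(np.abs(counts - pdf)))  # small for a long realisation
```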
great teaching.
Thanks. I'm glad you found it useful.
This video was excellent and very helpful
I'm so glad!
Thanks for the video. Are the sample functions (phone calls) starting at different times, or to be stationary should it start at same time and have same length?
The technical/mathematical definitions are for infinite time waveforms (so there is no "start" or "end"). But in practice we apply these mathematical definitions to finite time duration signals. Also, in practice of course, we only ever have one "realisation" of a signal (unless you believe in infinite "parallel universes").
Thank you, it's a great explanation. Can you recommend a book for practice problems on probability and random processes?
There are lots of books that cover these topics. Probably the Schaum series have lots of worked questions (although I haven't used them myself). I like the book: Peyton Z. Peebles Jr, “Probability, Random Variables and Random Signal Principles”
@@iain_explains Thank you for your reply😊
Many thanks for Great explanation
Glad it was helpful!
thank you so much
You're welcome!
What is unclear to me is whether the sample functions (the "sample points", as my textbook calls them) are all possibilities of the same event (Jim calling me, for example), or a collection of actual events?
A "sample function" is a realisation of a Random Process. In the same way that a "sample" is a realisation of a Random Variable. For example, the outcome of rolling a 6-sided dice is a Random Variable. If you actually roll a dice, and look at the number on the side facing up, then that is a "sample" of that Random Variable.
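The die example above can be sketched in a few lines of Python; the function name is my own hypothetical helper, purely for illustration:

```python
import random

random.seed(42)

# A Random Variable: the outcome of rolling a fair 6-sided die.
# Each call realises ("samples") the Random Variable once.
def roll_die():
    return random.randint(1, 6)

sample = roll_die()  # one realisation: a plain number, no longer random
print(sample)

# Many realisations can be used to estimate the distribution empirically:
samples = [roll_die() for _ in range(10_000)]
print(sum(samples) / len(samples))  # close to the true mean of 3.5
```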
Thank you sir🙏
Thanks for sharing this amazing explanation!
Glad it was helpful!
The best, thank you
Glad you like it!
Thank you sir for clear explanation
You are welcome
Thanks for the video! If the PDFs are the same across all different times, isn't it also i.i.d.?
*(Future reference added in the reply.)*
If the PDFs are the same at all times, then the Random Variables at those times are "identically distributed" (i.d.). For i.i.d. the RVs need to be independent as well.
@@iain_explains Oh, then "stationary random process" means the PDFs are the same at all times, and that means "identically distributed" (i.d.). But it's not i.i.d. unless the RVs at all times are also independent of each other. Did I understand correctly? Thank you!! :)👍
(Seems like UA-cam deleted my comments since I put some external links... So I'm rewriting without the links... c'mon UA-cam..!)
For future ref: a stationary RP has the same PDFs at all times (i.d.), but NOT vice versa (i.d. doesn't mean stationary RP).
- For an example of i.d. that's not a stationary RP,
(example 1) see the comment on the question "Example of identically distributed process, not stationary" on StackExchange/Mathematics.
(example 2) see kccu's answer to the question ["strong stationary process" equivalent to a process with "identically distributed" random variables?] on StackExchange/Mathematics.
- For an example of a stationary RP that's not i.i.d. (only i.d.), see Joe Blitzstein's answer to this question: "If a stochastic process is strictly stationary, does it imply that the random variables in the process are independent and identically distributed?" on Quora.
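A minimal Python sketch of an "identically distributed but not stationary" process, in the spirit of the linked examples (the construction itself is my own, not taken from those answers): every X_t is N(0,1), but the joint law of consecutive pairs depends on t, so the process is not strictly stationary.

```python
import numpy as np

rng = np.random.default_rng(2)

# X_0 = X_1 = Z and X_2 = W, with Z, W independent N(0,1).
# All three RVs have the SAME marginal pdf (identically distributed),
# but (X_0, X_1) are perfectly correlated while (X_1, X_2) are
# independent, so the joint distribution of (X_t, X_{t+1}) depends
# on t -> not strictly stationary.
n_trials = 100_000
z = rng.normal(size=n_trials)
w = rng.normal(size=n_trials)
x0, x1, x2 = z, z.copy(), w

print(np.corrcoef(x0, x1)[0, 1])  # ~1.0 (lag-1 pair starting at t=0)
print(np.corrcoef(x1, x2)[0, 1])  # ~0.0 (lag-1 pair starting at t=1)
```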
I need some clarification here. If a process is stationary, doesn't that mean it is ergodic? If at every instant it follows the same distribution, then the whole function's values will also follow the same distribution, right? Please, someone explain.
Great question. It's going to be the topic of one of my upcoming videos. My best "counter example" is to consider the process of the height of planes as they fly over a town in windy turbulent conditions. As each plane flies overhead, its height will go up and down randomly depending on the wind. So the height pdf could be modelled as a Gaussian centred on the height that the flight control tower told them to fly at. But what if there were two allowable heights in the regulations? Then some planes would have a stationary Gaussian pdf centred on one height, and other planes would have a stationary Gaussian pdf centred on the other height. The overall pdf of the overall process of "planes that fly overhead" will be a _stationary_ pdf which is the addition of the two Gaussians. But if you selected a specific plane, and tracked its flight height, and generated a histogram from that time-domain realisation from that plane, then it would only show a single Gaussian. You couldn't learn the overall "two Gaussian" pdf, from any single realisation of the process. Therefore it is not _ergodic_ .
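A rough simulation of the two-flight-level example above (all heights and spreads are made-up numbers, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

LEVELS = [9_000, 11_000]  # two allowable flight heights, metres (assumed)
SIGMA = 150.0             # turbulence-induced spread around each level

def plane_heights(n_times):
    """One realisation: one plane, tracked over n_times instants.

    The flight level is chosen once (at random) and then stays fixed
    for the whole flight; only the turbulence varies over time.
    """
    level = rng.choice(LEVELS)
    return level + rng.normal(scale=SIGMA, size=n_times)

# Time average from a SINGLE realisation hugs one level only...
one_plane = plane_heights(50_000)

# ...whereas the ensemble average over many planes sits between the levels.
many_planes = np.array([plane_heights(1)[0] for _ in range(50_000)])

print(one_plane.mean())    # near 9000 OR near 11000
print(many_planes.mean())  # near 10000: time average != ensemble average
```

The single-plane histogram is one Gaussian; the ensemble histogram is the two-Gaussian mixture. That mismatch is exactly the failure of ergodicity.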
Very clear explanation! Thank you very much.
Glad it was helpful!
Great explanation! Do you know a good book of exercises/problems about stochastic processes with the solution?
This is great!
Glad you liked it.
very helpful thank you
You're welcome!
God bless you sir! Please keep up the amazing work!
Thank you, I plan to.
Iain, thank you very much! A question: if a sample is stationary, wouldn't the histogram always be the same as the PDF? And if so, is what defines a random process as ergodic that every sample is stationary and has the same PDF? I hope my question was clear. Thanks again!
The histogram of a sample realisation function/waveform is only the same as the pdf of the stationary random process from which it was sampled, if the process is ergodic. Consider this non-ergodic process: "the speed of runners in the London marathon". If you randomly pick a runner, and plot their speed as a function of time, then take the histogram of that plot (over time), it would not be the same as the overall pdf - because it would depend on which runner you picked. Slow and fast runners would have different histograms. The random process would be stationary (assuming the London marathon is a flat course, and average speeds don't change over time), but not ergodic.
@@iain_explains Thank you!
Very clear. Thank you!
Glad it was helpful!
Thanks Pro ❤
You're welcome 😊
Excellent video, thank you.
Glad you liked it!
Excellent, thanks a lot!
Glad it helped!
Thank you so much! Nice video!!!
How did you know that X(1) follows Rayleigh distribution ?
Hopefully this video will help: "What is Rayleigh Fading?" ua-cam.com/video/-FOnYBZ7ZfQ/v-deo.html
@@iain_explains Thanks a lot !!!!!!
Limpid explanation, thanks !
Any reference book to suggest?
Thank you for the video
Glad it was helpful!
If we know what type of distribution describes the behavior of a random variable, wouldn't that mean the process is not truly random? In other words, if a process that generates a random variable were truly 'random', then we could not know what probability distribution describes that variable... Maybe this is just semantics, but can someone explain why I am wrong?
So, a few things here. First of all, what do you mean by "truly random"? It's not a term that gets used by mathematicians, because it's a bit too vague. It sounds like you mean "every possible outcome has equal probability". We call that a "random variable with a uniform probability distribution". We use the term "random" for things where the outcome is unknown ahead of time. It's either "random" or it's not. There are no degrees of "randomness", i.e. nothing is "truly random" - it's just "random".

The second thing to say is that a "process" does not "generate a random variable". A process is made up of a time-sequence of random variables. Perhaps you are thinking about an actual "realisation" of a process, in other words a measured outcome/signal over time (for example, the maximum temperature measurements in Sydney over the past 1000 days). But the problem is, time only goes forward. We can't repeat the "experiment" and try those 1000 days over again, to see whether those particular 1000 days are truly representative of Sydney's long-term weather, and test whether a different outcome would have happened if we'd had those 1000 days over again.

Basically, we need to accept that probability distributions are only models of what we believe reality to be. Now we're getting philosophical ... but what else can we do? Random variables are at the interface between "science" and "belief" - that's what makes them so interesting.
This video might help: "What is a Random Variable?" ua-cam.com/video/MM6QM3y8pvI/v-deo.html
Is random process the same meaning as stochastic process?
Yes. I probably should have mentioned that in the video.
Thank you 🇩🇿💛🧠
You're welcome.
King
I have always heard of random variables as, for example, the number of heads when a coin is tossed twice: 0, 1 or 2. So the random variables are 0, 1, 2.
Here you say "a random variable"; you are not saying "random variables".
0,1 and 2 are not random variables. They are not random, and they are not variables. They are numbers. They don't vary. 0 is always 0. 1 is always 1. 2 is always 2. A "variable" is something that can take on multiple values.
So, in your example, you might decide to define the random variable "X" to be the addition of the outcomes of two coin tosses. Then X will have the value 0 if two tails are tossed, it will have the value 1 if one tail and 1 head is tossed, and it will have the value 2 if two heads are tossed. The value of X will vary, depending on the outcome of the experiment. Therefore it is a "variable". And since the outcomes are random, it is a "random variable".
See this video for more explanation: "What is a Random Variable?" ua-cam.com/video/MM6QM3y8pvI/v-deo.html
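The two-coin-toss random variable X described above can be sketched in Python (the function name is my own, purely illustrative):

```python
import random

random.seed(0)

# The random variable X: number of heads in two tosses of a fair coin.
def X():
    return sum(random.choice([0, 1]) for _ in range(2))  # head=1, tail=0

# 0, 1 and 2 are just the possible VALUES of X; X itself is the variable.
counts = {0: 0, 1: 0, 2: 0}
n = 100_000
for _ in range(n):
    counts[X()] += 1

for value, count in sorted(counts.items()):
    print(value, count / n)  # ~0.25, ~0.50, ~0.25
```

The empirical frequencies match the theoretical probabilities 1/4, 1/2, 1/4 for the values 0, 1 and 2.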
@@iain_explains thanks
@@iain_explains I want to know about continuous random variables