I think what was most interesting for me was where you decided the model was as good as it was going to get, and how you went about making that determination. With a fit percentage in that range, I would have ended up just trying every combination of model order and zeros and assuming I wasn't getting something right or that the model wouldn't be usable.
Big fan. Brian taught me controls; MATLAB helped me understand them. Although MATLAB rejected me during my interview, I'm still working at an OEM today using MATLAB products.
You make me regret that I swapped my System Identification course for another one. System identification is interesting.
Where were you when I was in uni 😢
Better late than never? ☺
Amazing work!
Interesting, though not easy material. Unfortunately the first link provided (to Resourcium) does not work.
Where is the link that describes in detail the whiteness of the prediction residuals and the correlation between those residuals and the input to the system? Can you tell me, please?
Well explained. Where are the MATLAB scripts used in this video?
Hi Brian! Thanks for this video. Unfortunately the link to Resourcium doesn't work; "Page not found" appears on Resourcium. I'm really interested in the code you give in the examples. Would it be possible for you to share that document?
Hello, open MATLAB and type this at the command line:
>> doc linearRegressor
In the "Examples" section of that page you will find the Open Live Script links for all the examples in this video.
Great video! I am confused about how the one-step-ahead predicted output is calculated.
Instead of applying the whole input sequence to the model and comparing the model output to the real test results, you pick a time instant in the data, initialize the model with the measured output at t1, apply the measured input at t1 to the model, take the predicted output at t2, compare it to the measured output at t2, and so on...
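If it helps, here is a minimal sketch of that in the System Identification Toolbox (z and sys are just placeholder names for an iddata set and an identified model, not names from the video):
>> % predict() uses the measured outputs up to t(k), plus the input, to form
>> % the one-step-ahead prediction at t(k+1), instead of simulating the whole
>> % output from the input alone.
>> yhat = predict(sys, z, 1);   % 1-step-ahead predicted output
>> compare(z, sys, 1)           % measured output vs. 1-step-ahead prediction, with a fit percentage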
The Resourcium link is for the first video in the series.
Great video -- however, for the next one, as you get into online estimation using recursive least squares, could you go over an example where the estimation starts from a modeled mathematical plant and goes from there, as opposed to doing the online estimation from a totally unknown model where the parameters could diverge a lot?
What I have realized is that complete black-box modeling is almost always a bad idea. We should try to incorporate as much information as we have about our system and then take the grey-box approach.
Brian and MATLAB, my worlds collide now.
Can anyone help me get the dataset used in the video?
Is there an official playlist for these system identification videos?
Thank you, very well explained and a nice example. There is just one thing still going around my head. How does the estimation of the disturbance model work? Is there an official site explaining this? After all, the problem with the standard estimation methods, e.g. least squares, is that you don't have the white noise input, right? So how do you find the optimal values for the disturbance model?
Great question! First of all, there are various noise models you can try, like white noise, Gaussian noise, or noise at a particular frequency (like 60 Hz from the household power supply in the case of power systems). Also, while the video only talked about process noise, we also have measurement noise because of non-ideal sensors. So by implementing some sort of estimator like a complementary filter, moving average, or Kalman filter (for linear stochastic systems; EKF/UKF/PF for nonlinear systems), we can get a filtered output, and then we can focus on capturing the key dynamics of the actual plant with process disturbances. Let me know what you think.
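For what it's worth, a rough sketch of how that disturbance-model estimation might look in the System Identification Toolbox (z is a placeholder iddata name; the ARMA1 option matches the process-model workflow discussed in these comments):
>> % The white-noise source driving the disturbance is never measured; it is
>> % inferred as part of the prediction-error minimization while the ARMA1
>> % disturbance model and the P1D process model are estimated together.
>> opt = procestOptions('DisturbanceModel', 'ARMA1');
>> sysP1D = procest(z, 'P1D', opt);
>> resid(z, sysP1D)   % check whiteness of residuals and their correlation with the input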
Can there be a scenario where the validation fit of the model with a disturbance component is lower than that of the model without any disturbance component, in spite of high autocorrelation among the residuals? In my data, a sysTF model (first-order transfer function with one pole, no zeros, and finite dead time) performs better on both the estimation and validation datasets than a first-order process model with the disturbance fit to an ARMA1 model, yet sysTF has high autocorrelation of residuals. Interestingly, fitting a second-order ARMA2 disturbance model seems to improve the fit on the validation dataset.
What I don't understand is that he also found a disturbance path, but when he tested, he did not give any Gaussian white noise as an input to the disturbance path. Am I missing something?
The sysP1D model he derived, which accounts for the disturbance, already contains the information that the output will be corrupted by process noise, modeled here with the ARMA1 disturbance model. So he doesn't need to explicitly apply Gaussian random noise. Look closely at the MATLAB output after the sysP1D estimation.
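You can see that in MATLAB with something like the following sketch (zv is an assumed name for a validation iddata set, not from the video):
>> % With the default infinite prediction horizon, compare() purely simulates
>> % the process part, so no explicit white noise has to be supplied.
>> compare(zv, sysP1D)
>> % With a finite horizon, the ARMA1 disturbance model is used to predict the
>> % noise contribution from past measured outputs.
>> compare(zv, sysP1D, 1)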
When will part 3 be released?
Hi, I'm not an expert, but I'm trying to replicate what you did and I think I found an error: at 15:50 you wrote sysInit = idproc('P1D','TimeUnit','seconds'); I'm pretty sure it should be sysInit = idproc('P1D','TimeUnit','minutes');
Thanks!
perfect