Awesome video. Very clear and to the point. Thanks!
Why are the values for p0 [100, 0.01, 100, 0.01] and not something else? Also, why do they affect the fitted curve that much if they are just initial guesses?
You are correct that they are initial guesses. Sometimes the solver doesn't converge if the initial guesses are very poor. Another reason the fit may converge to different values is that there are multiple local minima. A sum-of-squared-error problem with a linear model is always convex (one local minimum); once you introduce nonlinear terms, the problem can become non-convex.
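As a small sketch of that effect (the double-exponential model and synthetic data below are illustrative assumptions, not necessarily the model from the video), two different starting points can lead curve_fit to different answers:

import numpy as np
from scipy.optimize import curve_fit

# Illustrative model: a sum of two exponentials, a classic case where the
# least-squares surface has multiple local minima.
def model(x, a, b, c, d):
    return a * np.exp(-b * x) + c * np.exp(-d * x)

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = model(x, 100, 0.5, 50, 0.05) + rng.normal(0, 2, x.size)

# A starting point in the right ballpark should recover parameters
# close to the true ones.
p_good, _ = curve_fit(model, x, y, p0=[80, 0.3, 40, 0.1])
print("good p0 ->", p_good)

# A poor starting point may land in a different local minimum or fail.
try:
    p_poor, _ = curve_fit(model, x, y, p0=[1, 5, 1, 20])
    print("poor p0 ->", p_poor)
except RuntimeError as err:
    print("poor p0 did not converge:", err)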
I am trying to solve a similar problem in my work (apply a curve fit model to compute the optimised parameter values), but I am using parameters taken from an Excel file. I'm wanting to adapt your suggestion (which is very useful, by the way) to my work, but I receive an error.
temp = df2['Temperature'].values
growth = df2['Growth rate'].values
def boatman(self, temp, max_growth_rate, min_temp_limit, max_temp_limit, skewness, kurtosis):
return max_temp_limit * (np.sin(np.pi*((temp - min_temp_limit)/(max_temp_limit- min_temp_limit))**skewness)**kurtosis)
params = [1, 14, 34, 1, 0.5]
c, cov = curve_fit(boatman, temp, growth, params)
print(c)
TypeError: boatman() missing 1 required positional argument: 'kurtosis'
Is there any way I can make this work?
Here is an example that may help: docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html
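For reference, the TypeError comes from the self argument: curve_fit passes the temperature array as the first positional argument, so with self in the signature every parameter shifts by one and kurtosis ends up missing. Below is a minimal sketch of a standalone version; the Excel file name is a placeholder, and the amplitude is changed to max_growth_rate on the assumption that it was meant to be used (it is unused in the original formula).

import numpy as np
import pandas as pd
from scipy.optimize import curve_fit

# Placeholder file name -- replace with your own workbook.
df2 = pd.read_excel('growth_data.xlsx')
temp = df2['Temperature'].values
growth = df2['Growth rate'].values

# Standalone function: no 'self', so curve_fit maps temp to the first
# argument and the five p0 values to the remaining parameters.
def boatman(temp, max_growth_rate, min_temp_limit, max_temp_limit, skewness, kurtosis):
    # Assumed form: amplitude = max_growth_rate (unused in the original code)
    frac = (temp - min_temp_limit) / (max_temp_limit - min_temp_limit)
    return max_growth_rate * np.sin(np.pi * frac**skewness)**kurtosis

params = [1, 14, 34, 1, 0.5]
c, cov = curve_fit(boatman, temp, growth, p0=params)
print(c)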
scipy.stats.pearsonr(x, y) is also available; it returns the correlation coefficient r, which you can square to get R^2.
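A quick illustrative sketch (the x and y arrays are made up):

import numpy as np
from scipy.stats import pearsonr

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 1.9, 3.2, 3.9, 5.1])

r, p_value = pearsonr(x, y)  # correlation coefficient and p-value
print("R^2 =", r**2)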
Thanks for the tip!
Hi, maybe this is a basic question, but it is one I have struggled to find the answer to. I have a random set of values and calculate the fitting parameters mu and std using scipy.stats.norm.fit. I wanted to see what the difference would be if I simply calculated the mean and standard deviation using numpy. To my surprise, every time I plotted both pdfs and printed the means and stds from both methods I got exactly the same values, so what is the point of norm.fit?
Maybe there is something really obvious I'm not seeing.
The result of scipy.stats.norm.fit(data) is the maximum-likelihood estimate of the mean and stdev, which for a normal distribution is just the sample mean and standard deviation. You should get the same answer using numpy.mean(data) and numpy.std(data).
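As a quick check of that equivalence (the data here is randomly generated just for illustration; note that np.std defaults to ddof=0, which matches the maximum-likelihood estimate norm.fit returns):

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(loc=5.0, scale=2.0, size=1000)

mu_fit, std_fit = stats.norm.fit(data)       # maximum-likelihood estimates
mu_np, std_np = np.mean(data), np.std(data)  # np.std uses ddof=0 by default

print(mu_fit, mu_np)    # the same, up to floating-point rounding
print(std_fit, std_np)  # the same, up to floating-point rounding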
Since these functions are already available in Excel, what is the advantage of using Python?
For this problem there is no advantage; it is meant to show a comparison between the two. For problems with large amounts of data, a programming language like Python will be much more capable. The strength of Python is not any individual capability but that it combines with many other packages to give a holistic solution to complex data science and analysis problems.
One other thing to consider is that Excel and Python can be used together. Although the two can interface with Pandas and xlwings, some are suggesting that Microsoft may integrate Python more fully as a backend programming environment for Excel, similar to what VBA currently does.
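As a small sketch of the pandas route (the file, sheet, and column names below are placeholders):

import pandas as pd

# Placeholder file/sheet/column names -- replace with your own workbook.
df = pd.read_excel('measurements.xlsx', sheet_name='Sheet1')

# Columns come out as NumPy arrays that can be passed straight to
# numpy/scipy routines such as curve_fit.
x = df['Temperature'].values
y = df['Growth rate'].values

# Results can be written back to a new Excel file just as easily.
df.to_excel('measurements_out.xlsx', index=False)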
Dear professor, do you know of any Kalman filter library where I can supply the A matrix = [[1,0,dt,0],[0,1,0,dt],[0,0,1,0],[0,0,0,1]], and likewise the control/input matrix, the process noise, and the measurement noise, and then get an estimate of the position from [x, y] measurements?
Here is an example Kalman filter script: scipy-cookbook.readthedocs.io/items/KalmanFiltering.html - I don't know of one that is specific to your application.
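If it helps, here is a minimal sketch (NumPy only) of the predict/update steps for the constant-velocity model you describe; the dt, noise covariances, and measurements below are illustrative assumptions.

import numpy as np

# State = [x, y, vx, vy], measurement = [x, y].
dt = 0.1
A = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])    # state transition matrix from the question
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]])    # measurement matrix (observe x, y)
Q = 0.01 * np.eye(4)            # process noise covariance (assumed)
R = 0.5 * np.eye(2)             # measurement noise covariance (assumed)

x = np.zeros(4)                 # initial state estimate
P = np.eye(4)                   # initial estimate covariance

def kalman_step(x, P, z):
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update with measurement z = [x, y]
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(4) - K @ H) @ P_pred
    return x_new, P_new

# Feed in a few (made-up) noisy position measurements.
for z in [np.array([0.10, 0.00]), np.array([0.20, 0.10]), np.array([0.35, 0.18])]:
    x, P = kalman_step(x, P, z)
    print("estimated position:", x[:2])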
Sir, one of the Python libraries is pykalman, written by Daniel Duckworth. In that library, what are transition_offsets and observation_offsets?
Sorry, I haven't used that library. I'd recommend searching for documentation or examples and then ask the developers if you can't find what you need.
ok sir thanks
thanks, helped a lot!
Hello sir! I am learning a lot from your channel regarding MATLAB. Please make some videos on creating GUIs in MATLAB.
Hi! How do we add the error if we have a 0.5 error for each y data point? Thank you!
If the data points are always off by 0.5, then you could add or subtract that amount before performing the curve fit.
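If the 0.5 instead refers to a measurement uncertainty on each y value (rather than a constant offset), curve_fit can account for it through its sigma argument; a minimal sketch with a made-up linear model and data:

import numpy as np
from scipy.optimize import curve_fit

def f(x, a, b):          # illustrative model
    return a * x + b

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.2, 1.9, 3.2, 3.9])
yerr = np.full_like(y, 0.5)   # 0.5 uncertainty on every y point

# sigma weights the residuals; absolute_sigma=True makes the reported
# parameter covariance reflect the stated uncertainties.
popt, pcov = curve_fit(f, x, y, sigma=yerr, absolute_sigma=True)
print(popt, np.sqrt(np.diag(pcov)))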
Thanks!