At approx 31:00: if ISLAND is not showing, I just increased my test_size from 0.2 to 0.25, or until it became large enough that it did include ISLAND. Not sure of a real fix, but this worked to get past this hurdle. Take care
11:47 train_data.corr(numeric_only=True)
Thanks
this was really helpful
thanks
This saved me, thanks
bruh
A small summary for those who are going to start: he preprocessed the dataset a bit (removing NaN values, adding features, and splitting the categorical column into binary columns), then scaled, split, trained, and tested on linear regression and random forest, finding the best estimator at the end (no explanation of what estimators are, so read up on random forests before doing this).
how did he change ocean proximity from object to int?
@@mbulelondlovu9427 he took one feature like
Mate you explain everything so concisely and keep it so interesting! Really enjoyed this video
I agree with you
Could you briefly explain what the linear regression did? Were all the variables taken into account to fit a slope that predicts the value from existing data? What if we removed some negatively correlated features, and what would the response be? I fail to understand what we did apart from cool images. If you could make brief lectures on regression, random forests, decision trees, and clustering with some situation analysis, it would help us. Thanks
Just found your channel! I'm on a journey to become a data scientist and really build a solid understanding. This is a great first project to get under my belt. Having you by my side while going through the steps is awesome. I will also try doing projects all by myself, but following along first is a great way to get more comfortable and see the steps involved and how you tackle them! Greetings from Sweden!
One of the best machine learning tutorials on UA-cam, thanks a a lot for lucid and well detailed explanation.
Hi, do you have this code? Can you give it to me?
@@thinhtruong9405 I would highly recommend you watch the video until the end, search for the concepts, and try to write the code yourself. That's how you can fully benefit from this content.
@@softwareengineer8923 I see, but I have a problem and I need this code to do something, so if you have it, please give it to me. Sorry, I'm from Vietnam so my English is not good :((
Hi. What I would recommend in the hyperparameter tuning phase on the RFR model is to use np.arange() instead of a list of hard-coded values, which limits the model to two or three options.
Yes, this might take a lot of time to run, but using RandomizedSearchCV would be okay as a starter; then, if you see the model improving, you can switch to GridSearchCV.
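For anyone wanting to try this suggestion, here is a rough sketch (np.range() doesn't exist, so np.arange() is presumably what was meant; synthetic data stands in for the housing set):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

# synthetic stand-in for the housing features/target
X, y = make_regression(n_samples=200, n_features=5, noise=10, random_state=42)

# sample hyperparameters from ranges instead of a short hard-coded list
param_distributions = {
    "n_estimators": np.arange(50, 301, 50),
    "max_depth": np.arange(2, 11),
}
search = RandomizedSearchCV(
    RandomForestRegressor(random_state=42),
    param_distributions,
    n_iter=5,          # try 5 random combinations instead of the full grid
    cv=3,
    random_state=42,
)
search.fit(X, y)
print(search.best_params_)
```

If the random search points at a promising region, a follow-up GridSearchCV over a narrower range is the usual next step.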
your tutorials are the best thing i found on the internet
Great video, thanks a lot. But I'm missing the most interesting part: how can I use the model to get the house value for a property that isn't part of the data used?
did u discover that?
You can create a function that takes the model and X as arguments, and then you can predict any value you want
@@techsnail8581 dattebayo
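A minimal sketch of the wrapper idea from the reply above, on toy data rather than the actual housing frame (the function name here is made up for illustration):

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# toy training data standing in for the housing features
train = pd.DataFrame({"median_income": [2.0, 3.5, 5.0, 8.0],
                      "housing_median_age": [20, 30, 25, 40]})
prices = [150000, 220000, 300000, 480000]

reg = LinearRegression().fit(train, prices)

def predict_price(model, X_new):
    """Takes a fitted model and a DataFrame with the same columns as training."""
    return model.predict(X_new)

# a house that was never in the dataset
new_house = pd.DataFrame({"median_income": [4.2], "housing_median_age": [28]})
print(predict_price(reg, new_house))
```

The key point: any new row has to go through the same preprocessing (log transforms, dummies, scaling) as the training data before being passed to predict.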
I have two questions:
1. Why didn't you convert all the skewed features in train_data via log (many columns were skewed)?
2. I didn't see any change in the histograms before and after. How did you decide that the data was converted to a normal distribution?
The bars should fit a normal distribution curve, which would generally peak in the middle.
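One way to check this numerically rather than by eye is to compare skewness before and after the log transform; a sketch on a synthetic right-skewed column (not the real total_rooms):

```python
import numpy as np
import pandas as pd

# heavily right-skewed toy column, similar in shape to total_rooms
rooms = pd.Series(np.random.default_rng(0).lognormal(mean=7, sigma=1, size=1000))

print("skew before:", rooms.skew())            # large positive skew
print("skew after: ", np.log1p(rooms).skew())  # near 0, i.e. roughly symmetric
```

A skewness close to 0 after the transform is a much more reliable signal than eyeballing two histograms with different bin widths.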
16:48, pd.get_dummies(data['ocean_proximity'], dtype=int)
explained better than my instructor xD thanks man
The good: feature engineering. I liked the one-hot encoding explanation and how easy you made it look.
The bad: extremely superficial explanations. E.g., at min 29, "we get a score of 66, which is not too bad, but also not too good." Great, thanks for the in-depth explanation of what 66 means and how to interpret it. Most of these "tutorials" are just people recording themselves writing code like it's a big deal. The really important piece is understanding the business problem and interpreting results in terms everyone can understand; I can copy/paste code from a hundred different websites. Also, linear regression is not about getting a 66 or whatever score; it's about predicting a value, in this case house prices. How is "66" relevant to that goal?
The ugly: you speak way too fast for no reason at all. You're making a tutorial, not speed racing.
Thanks anyway.
Agreed :)
ya think?
I should have cut my losses when you made the train/test split that early; at around 28:00 the instructions became too confusing to be useful. Until then, thanks for the instructions.
Exactly lmao, i for the life of me could not understand why he would not completely preprocess the data first and then split the data
Guys, please, how was he able to copy and paste so fast at 26:01, where he was changing the train data to test data?
Oh my!! Just amazing!! Make more such videos. Thank you so much.
Great content, but as a newly minted developer interested in ML, I do wish you went into a bit more detail on the key features being leveraged in the walkthrough. I would not mind spending an hour or so more to fully understand the methods and functions you're leveraging in this demo.
All in all, thank you for your hard work and dedication in sharing what I believe to be humanity's biggest development since the Industrial Revolution.
Keep on techin', sir.
Continuity issue apparently: did you drop the ocean_proximity column before you ran the correlation matrix? My train_data.corr() fails due to values like '
plt.figure(figsize=(15,8))
sns.heatmap(train_data.loc[:, train_data.columns!='ocean_proximity'].corr(), annot=True, cmap="YlGnBu")
I used this code to ignore the column. Hopefully this will help you get through it.
@@MatthewXiong-gk8nz thanks so much buddy
I'm impressed; your explanation is so smooth, and I can keep track of and understand every step or line of code you input 💯
You don't need to normalize data when dealing with linear regression; that's the main advantage of this method. It is based on coefficients, and those coefficients adjust to the order of magnitude of each variable!
Best tutorial I've seen.
The heatmap cannot be rendered while there are non-numeric values (ocean_proximity) in the train data
I have experienced the same issue - how did the author manage to render a heatmap without dropping this column?
Try sns.heatmap(train_data.corr(numeric_only = True), annot=True, cmap= "YlGnBu")
I had the same issue, and I resolved it by dropping the column:
# visualize a correlation matrix with the target variable
# dropping "ocean_proximity" because it's not numerical
data_without_OP = train_data.drop(['ocean_proximity'], axis=1)
plt.figure(figsize=(15, 8))  # adjust the figure size if necessary
sns.heatmap(data_without_OP.corr(), annot=True, cmap="YlGnBu")
plt.show()
-------
After that, you may face a problem where the heatmap doesn't show all the numbers; it's an issue with the matplotlib version you're using.
Save your notebook and close it, then create a new blank notebook and run this code:
!pip install matplotlib==3.7.3
If you run it inside your project notebook, it won't work and your notebook will freeze, because that notebook is still using matplotlib.
Thank you so much for the detailed video; everything was explained very well. I would suggest this could be the best video for a beginner to start machine learning projects with. Personally, this video helped me a lot, as I am taking up my first ML project.
Keep it up bro! Pls do more videos with predictions
boss so appreciated I can't even express it
7:27 Wouldn't you rather use data.isna().sum()? If you have a missing value somewhere in the row, you might not catch it otherwise.
isnull().sum()?
Amazing work man
This was a great video. Just discovered your channel today. Definitely going to subscribe!
How does this channel not have 1M subscribers yet!!
Hey bro, where is the dataset of California house prices? I didn't find it here or in your GitHub.
Or have you not shared it with us, although you said the link to the dataset is in the description?
When I ran X_test_s, I got: could not convert string to float: 'INLAND'. How do I solve it?
same here
I wouldn't waste your time. This code doesn't work and he races through everything. Much better tutorials out there.
Bro preprocess the data properly
@@sumankumarsahu9711 I followed the exact way he showed here
@@PulakKabir .corr(numeric_only=True)
Fixed the correlation portion at least
I am stuck at reg.score; please help me resolve my error
Hello, what should I do if my X_test doesn't have any values in ISLAND? I can't perform reg.score.
thanks for your help
How did you get the .corr() method to ignore the ocean_proximity column even though it had non-numeric values in the beginning??
train_data.corr(numeric_only=True) will do
@@gongxunliu5237 I didn't even know that was a parameter, tysm
@@gongxunliu5237 wow I rewatched the video 10 times to understand how he was able to get past that error and am still lost... I ended up converting the ocean proximity column into an id column prior to running the model... did corr() used to automatically filter out the string columns or something in the past?
@@jonathanitty5701 i think it was either that, or the default value changed from True to False, not sure which
Great tutorial! One correction at 12:45: longitude is inversely correlated with latitude, rather than with the median house income.
How did you fix it
For those in the comments section, never do inplace=True.
why?
What should we do to substitute that?
True
@@skripandthes
You are making changes to the dataframe that you can't reverse unless you restart the whole runtime in your workspace, like a Jupyter notebook.
@@olanrewajuatanda533
Just define a new dataframe.
Instead of doing this:
df.drop(col, axis=1, inplace=True)
Do this:
df = df.drop(col, axis=1)
This way you don't hard-code new changes into the dataframe, and you can just edit the cell and run it again to correct any mistakes.
What's the interpretation of the "score"? Is it R-squared for regression? How about for random forests? Do they compare from one model to another?
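For what it's worth: on scikit-learn regressors, .score() is indeed R-squared (the coefficient of determination), for random forests too, so the numbers are comparable across models evaluated on the same test set. A small sketch on synthetic data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.5, size=100)

reg = LinearRegression().fit(X, y)

# .score() on any scikit-learn regressor is R^2:
# 1.0 is a perfect fit, 0.0 means no better than always predicting the mean
print(reg.score(X, y))
print(r2_score(y, reg.predict(X)))  # identical value
```

So "a score of 0.66" means the model explains about 66% of the variance in house values on that data.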
So how do you find the working details of the model? It's great to know the 'score' is 0.8 or whatever but what parameters are used to get that 0.8? In other words, I train a model with a score of 0.8 then get some new data points (lat, long, #bedrooms, total_bedrooms, etc (all except house price)) What's the equation I use to generate an expected house value and where do I get it?
Great video though.
The model/function is made by the algorithm and that cannot be inferred. All we can do is put the values parameters and get the prediction.
@@Ailearning879 But can you please help me with where to test the trained model? Since we only got the model's accuracy or score. And I'm a beginner in ML.
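For a plain LinearRegression specifically, the "equation" asked about above is recoverable from coef_ and intercept_; a sketch on toy data (for a random forest there is no such closed form, you just call predict):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 2))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 5.0

reg = LinearRegression().fit(X, y)

# the equation of a fitted linear model: y_hat = X @ coef_ + intercept_
x_new = np.array([[1.0, 2.0]])
manual = x_new @ reg.coef_ + reg.intercept_
print(manual, reg.predict(x_new))  # same number both ways
```

In other words, the learned parameters are stored on the fitted estimator; predict() just applies them to new rows.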
That's a great video, but how do I get the predicted values now? I mean, I built the model, so how would I get predictions?
11:50 I got an error using corr() because of the non-numeric column 'ocean_proximity'. How did you do it? Did you change the pandas code?
Edit: I found it myself. Go to your Python installation path/libraries/pandas/core/frame.py,
find the corr function definition, and set numeric_only: bool = True.
Thanks bro
At 13:00 why didn't you apply np.log to 'median_income' and 'median_house_value'? They seem pretty skewed as well
thanks for the great project!
Great video. Apart from Linear Regression and Random Forest, are there any other algorithms that might be suitable for this type of problem?
KNN, decision trees (a random forest is a collection of decision trees), gradient boosting, and XGBoost. (Naive Bayes and Gaussian Naive Bayes are classifiers, so they don't apply to a regression problem like this.)
Try each of them with different parameters and select the best one with the best set of parameters.
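A rough sketch of that try-them-all approach using cross-validation on stand-in data (synthetic, not the housing CSV; XGBoost is omitted since it's a separate package):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor

X, y = make_regression(n_samples=300, n_features=6, noise=15, random_state=0)

models = {
    "linear": LinearRegression(),
    "knn": KNeighborsRegressor(),
    "forest": RandomForestRegressor(random_state=0),
    "boosting": GradientBoostingRegressor(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=3)  # R^2 per fold
    print(f"{name:10s} mean R^2 = {scores.mean():.3f}")
```

Cross-validation gives a fairer comparison than a single train/test score, since each model is evaluated on several folds.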
wish you had also showed some graphs that we can produce once the regression is done
BTW, how do you copy and paste so quickly around minute 14 when you were doing the 'log' adjustment on the train_data? Which shortcut are you using?
alt + shift + down arrow key.
Thank you for the nice explanation; keep up the good work. I want to know what the outcome of this model is, and what insight I get after running it.
What if I'm missing the ISLAND column?
I found that I could increase test_size from 0.2 to 0.25, or until it became large enough that it included the island by chance. Not a real fix, but it works here. Take care
In X_test I am getting 14 columns, while in X_train I am getting 15 columns. What should I do?
Add one more blank column/variable to the test set, which will be your target variable
@@parth1211 How do I do that?
Hey, have you solved this error?
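For the missing-ISLAND / column-count problems in this thread, one common fix is to align the test columns to the train columns after one-hot encoding; a sketch on toy frames (not the real split):

```python
import pandas as pd

# after one-hot encoding, a rare category like ISLAND can be absent from the test split
train_dummies = pd.DataFrame({"INLAND": [1, 0], "ISLAND": [0, 1], "NEAR BAY": [0, 0]})
test_dummies = pd.DataFrame({"INLAND": [1], "NEAR BAY": [0]})  # ISLAND missing

# align the test columns to the train columns, filling missing ones with 0
test_aligned = test_dummies.reindex(columns=train_dummies.columns, fill_value=0)
print(list(test_aligned.columns))  # same 3 columns, in the same order, as train
```

This way X_test always has exactly the columns the model was trained on, regardless of which categories the split happened to contain.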
Saw this as a "how to build a project" video; this is my first one. Let's see where this will take me.
Informative video. Quick question: why would you not want the values to be zero when taking the log of the values?
Because log(0) is undefined. That is, you cannot raise a number to a power to get 0.
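A two-line demonstration of why, and of the usual workaround of adding 1 before taking the log:

```python
import numpy as np

# np.log(0) is -inf, which later breaks model fitting
with np.errstate(divide="ignore"):
    bad = np.log(0.0)

# adding 1 first (np.log1p) keeps zeros finite: log(0 + 1) = 0
good = np.log1p(0.0)
print(bad, good)
```

This is why transforms on count-like columns are typically written as np.log(x + 1) or np.log1p(x).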
11:45 Use test_data.corr(numeric_only=True) instead, as it will return an error otherwise. I do not understand how you did not get an error.
I got this and had to apply the fix above to solve it: "ValueError: could not convert string to float: 'NEAR OCEAN'"
16:57 Second problem I ran into, if anybody can help: pd.get_dummies(train_data.ocean_proximity) returns True & False instead of 1s & 0s
@@marawanmyoussef same here 😢
This problem can be solved by chatgpt but later it creates a problem 🥲
I guess you mean train_data.corr(numeric_only=True), because test isn't defined yet; correct me if I'm wrong.
thank you so much
my ISLAND column gets deleted when creating test_data - any way to fix this?
sameeee
At minute 28:40, line 31, I typed the same reg.score(X_test, y_test), but it doesn't work. The ValueError is "Input X contains NaN."
What did I do wrong? Can anyone help me? I would like to complete this project. Thank you
run all cells again
@@samarthamera doesn't work
@@imansaid2321 Did you figure it out? It's not working for me either.
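One way past the "Input X contains NaN" error in this thread is to make sure the test split is cleaned the same way as the train split; a sketch on toy data (an imputer would be the more careful fix):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

train = pd.DataFrame({"income": [2.0, 3.5, 5.0, 8.0], "age": [20, 30, 25, 40]})
y_train = [150000, 220000, 300000, 480000]
reg = LinearRegression().fit(train, y_train)

# test split still contains a NaN, which makes reg.score raise ValueError
X_test = pd.DataFrame({"income": [4.0, np.nan, 6.0], "age": [28, 33, 35]})
y_test = pd.Series([260000, 250000, 360000])

# simplest fix: drop the NaN rows from X and keep y aligned via the same mask
mask = X_test.notna().all(axis=1)
score = reg.score(X_test[mask], y_test[mask])
print(score)
```

The important detail is filtering X and y with the same mask so the rows stay aligned.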
thank you !!!
it was really helpful
Everything was great, but the fact that I had to debug my entire code because we split early and had to preprocess the test data again was so painful, especially in JupyterLab.
Thanks for the vid! First day on your channel, really happy I found you!
It seems you use some sort of autocomplete when typing in the terminal? Or is your typing just that fast?
Sir, I am getting a -1.25 score!
What should I do now?
Where can I get the full code?
Timestamp : 20:00
Great job!
Hi. Very well explained! thank you.
I can't get over you sir
You are a legend
The random forest algorithm takes features at random, so even if we change literally nothing and fit the model again and again, we can see the scores changing (±2%).
Also, only one variable, median income, was strongly related to the target (because it had a correlation > 0.5).
If many variables had been above 0.5, we might have seen drastic changes during the grid search over max_features.
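A quick illustration of the score-jitter point: pinning random_state makes refits reproducible (synthetic data, not the housing set):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=5, noise=10, random_state=0)

# without random_state, refitting gives slightly different scores each time;
# pinning it makes two identical fits produce the exact same score
a = RandomForestRegressor(n_estimators=50, random_state=42).fit(X, y).score(X, y)
b = RandomForestRegressor(n_estimators=50, random_state=42).fit(X, y).score(X, y)
print(a == b)
```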
Why did you say this is classification at 39:39 when it is a regression problem?
🤯 Great video.
Hi, how did you join the train data and still get the correct values in median_house_value? I got NaN here. Thanks!
Excellent tutorial...
Is there a link to the Python notebook?
Where can I get the notebook? I tried searching your GitHub repository but don't see anything related to house price prediction. Can you please share the notebook?
Why do we need a normal distribution in total_rooms, population, etc.?
At minute 16:53 I am facing this issue where it's supposed to provide the output with binary values, but instead it is displaying bool values. Is there any way I can convert the values from boolean to binary?
I'm having the same issue; is there any fix?
df = pd.get_dummies(train_data.ocean_proximity)
print(df)
df = df.replace({True: 1, False: 0})
print(df)
Explained everything perfectly. Your channel is going to be my go-to channel to learn data science!!
Hey bro! Can you please guide me on predicting a number in a specific position by reading existing Excel data? I want to generate 6 numbers with this logic.
May I ask why encoding is not applied to the latitude and longitude?
Can you add custom code so that the model predicts the sale price when input data is given?
How did you get 0.66 score? I made similar data transformations and got only 0.25 score and 0.78 MSE
Hi! How did you get those Vim bindings in jupyter?
Is it just me who's getting the error "Input contains NaN, infinity or a value too large for dtype('float64')"? For both linear as well as random forest
Where do you define X_test_s? When I want to do the scaling I should use X_test_s as in your code, but I get an error that X_test_s is not defined.
No matter what I do, I can't get the join method to work.
same here
Hi NeuralNine. I am having a doubt executing the corr() function. How can I move forward?
Try corr(numeric_only=True)
Sorry to say, but in my code "ocean_proximity" is not shown.
How do I get the same dataset? Where?
So, where exactly is the "machine learning" part? All I saw were regressions.
Regression IS machine learning. When you predict categories or classes it is called classification. When you predict numeric values, we call this regression. Even if you use complex neural networks it is still regression. But not necessarily linear regression, which might be what you are thinking about. Random Forests are also non-linear.
Hello there, can I ask for your help with data preprocessing for a specific dataset? It has 53,884 rows and 8 columns.
Man! Your computer runs effortlessly😅 It's soo smooth...
What are the specs? 😅
I need to get one like that.😂
Great video. Oh, and by the way, what is the intro music? I'm a music artist and would love to hear the full thing.
Thanks for the vid
I don't have the ISLAND column when i do the X_test join y_test and so i get errors. how do i fix that?
Also having this issue
from sklearn.model_selection import StratifiedShuffleSplit

stratify_col = df['ocean_proximity']
stratified_split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
for train_index, test_index in stratified_split.split(X, stratify_col):
    X_train, X_test = X.iloc[train_index], X.iloc[test_index]
    y_train, y_test = y.iloc[train_index], y.iloc[test_index]
As of this writing, I am not able to find the exact dataset (.csv file) for California house prices. If someone can provide me with the link to the one used in this video, it would be greatly appreciated!
ValueError: columns overlap but no suffix specified: Index(['longitude', 'latitude', 'housing_median_age', 'total_rooms', 'total_bedrooms', 'population', 'households', 'median_income', 'ocean_proximity'], dtype='object')
I got this error when I tried to join the train data, like this: train_data = X_train.join(y_train). How do I solve this?
There should be no overlap, your X data are your 'features' - the attributes that your model uses to make a prediction of y 'labels'. In this scenario, the features are things like long, lat, bedrooms, population etc.. the label is the median house price because that is the value you want to predict.
You have to drop the median house prices column from the X data frame and assign that column to the y variable. Then once you join X and y, you shouldn't have any overlaps
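The reply above can be sketched on a toy frame (column names borrowed from the video's dataset):

```python
import pandas as pd

# toy frame standing in for the housing data
df = pd.DataFrame({
    "median_income": [2.0, 3.5, 5.0],
    "population": [300, 500, 800],
    "median_house_value": [150000, 220000, 300000],
})

# features: everything except the target; label: the target column alone
X = df.drop("median_house_value", axis=1)
y = df["median_house_value"]

# joining them back produces no overlapping columns
rejoined = X.join(y)
print(list(rejoined.columns))
```

The "columns overlap" error appears exactly when the target column was never dropped from X, so join() finds the same column names on both sides.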
Can you upload the data path over here
it was great thank you a lot bro.
Nice, ty
plt.figure(figsize=(15,8))
sns.scatterplot(x='latitude', y='longitude', df = train_df, hue='median_house_value', palette='coolwarm')
This line of code is not working; it's showing ValueError: Could not interpret value `latitude` for parameter `x`.
How can I fix this?
ask chat gpt
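A more concrete answer than "ask ChatGPT": the likely bug is that sns.scatterplot has no df= parameter; the DataFrame goes in as data=, which is why seaborn can't resolve 'latitude'. A sketch on made-up coordinates:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import pandas as pd
import seaborn as sns

train_df = pd.DataFrame({
    "latitude": [34.1, 36.5, 38.2],
    "longitude": [-118.2, -119.7, -121.5],
    "median_house_value": [350000, 180000, 290000],
})

# pass the frame via data=, not df=; then x/y/hue can be column names
ax = sns.scatterplot(x="latitude", y="longitude", data=train_df,
                     hue="median_house_value", palette="coolwarm")
```

With data= set, string arguments like x="latitude" are looked up as columns of the frame.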
I got a ValueError when I used .corr() on my train data, something along the lines of not being able to convert a str into a float, so I am unable to make a heatmap. I am an absolute beginner, so can someone please help me out? Anything will be much appreciated.
What to do if I get notified error
Where is the source code of this project? I'm getting some errors.
I don't know why, but errors are generated in my code, though I write exactly the same thing as you do. And I have no idea what to do. 😅