For those struggling with the regular expression at 14:57 , you might need to explicitly assign regex = True (based on the FutureWarning displayed in the video). That is:
df['Phone_Number'] = df['Phone_Number'].str.replace('[^a-zA-Z0-9]', '', regex=True)
gosh you're observant
Thank you!
My goodness. You saved me. I’ve been at this for about an hour. Thank you 🙏 thank you 🙏
Thanks a lot dude !!!!!! Helped a lot !!!!!!!
Legend.
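For anyone who wants to try the fix above end to end, here is a minimal, self-contained sketch; the sample phone values are invented, not the video's exact dataset:

```python
import pandas as pd

# Invented sample values resembling the video's messy phone numbers
df = pd.DataFrame({'Phone_Number': ['123-545-5421', '123/643/9775', '876|678|3469']})

# regex=True makes the pattern a regular expression, stripping every
# character that is not a letter or a digit
df['Phone_Number'] = df['Phone_Number'].str.replace('[^a-zA-Z0-9]', '', regex=True)
```

After this, every value is a bare 10-digit string, ready for the xxx-xxx-xxxx formatting step.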
Fan from India I just got 2 offers from very good companies thanks to your videos and it helped me transition from a customer success support to Data Analyst
Hey, tell me how I can do it too. Right now I'm working as a customer support executive. Please help me grow.
Hey Rahul, how did you learn DA? Can you share your experience? It would be helpful for us!!
Hi bro, is this course sufficient for a beginner to land a job?
Is this a spam comment?
@rozakhan2811 Skills are the basic thing. Whatever you want to do, be strong in that. And the way Alex teaches in his videos is effective.
For splitting the address at 21:29, you may want to add a named parameter to the value of 2, as in n=2:
df[["Street_Address", "State", "Zip_Code"]] = df["Address"].str.split(',', n=2, expand=True)
This helps! Thank you so much!
Thank you very much
thank you very much
Thank you!
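A runnable sketch of the n=2 split suggested above; the addresses are invented examples in the same "street, state, zip" shape:

```python
import pandas as pd

# Invented addresses in the "street, state, zip" shape used in the video
df = pd.DataFrame({'Address': ['123 Shire Lane, Shire, 98765',
                               '1209 South Street, Gotham, 10001']})

# n=2 caps the split at two commas (three parts); expand=True returns a
# DataFrame so the parts can be assigned to three new columns at once
df[['Street_Address', 'State', 'Zip_Code']] = df['Address'].str.split(',', n=2, expand=True)
```

Note the split keeps the leading space after each comma; a follow-up `.str.strip()` per column would tidy that up.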
OMG! Thank you so very much. I have been trying to figure this out for about four days now. I figured out the phone number issue and then how to split the address, but for the life of me splitting the address into named columns with the changes committed the df was not working. THANK YOU!
I like how in some of your videos you show us the long way and then the short cut, instead of just showing the short cut. I think that way gives the person who is learning a better breakdown of what they are doing.
This is one of the best videos regarding data cleaning I have ever watched. Really crisp and covers almost all the important steps. It also dives deep into concepts that are really important, but you rarely see anybody applying them.
A must-watch for everybody who is looking to get into the data field or is already in it.
Glad to hear it!
what seems to be a daunting task at the beginning turns out to have an easy explanation with the right tools, thank you Alex !!!
Thanks for this content, this was so helpful!!
I think I have some optimizations, correct me if I'm wrong :D
27:04 Instead of calling the replace function multiple times, you can create a mapping like replace_mapping = {'Yes': 'Y', 'No': 'N'} and call it like df = df.replace(replace_mapping), so you don't have to specify a mapping for each column and only need to call .replace() once.
34:16 Instead of the for loop and manually dropping row by row, you can use the .loc function, like df = df.loc[df["Do_Not_Contact"] == "N"], to filter the rows based on the filter criterion.
Where did you learn that you could use a dictionary format to replace multiple values in one line? this is really useful, thanks!
Thank You. 34:16 is really helpful. I appreciate your kindness.
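The two suggestions in this thread can be sketched together in a few lines; the frame below is invented to mimic the video's columns:

```python
import pandas as pd

# Invented frame with the two columns the comment mentions
df = pd.DataFrame({'Paying Customer': ['Yes', 'No', 'Y'],
                   'Do_Not_Contact': ['No', 'Yes', 'N']})

# One dict, one call: standardizes matching values in every column
df = df.replace({'Yes': 'Y', 'No': 'N'})

# Boolean .loc filter instead of dropping rows one by one in a loop
df = df.loc[df['Do_Not_Contact'] == 'N']
```

The dict form replaces whole cell values across all columns, which is exactly why a single call covers both columns here.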
I really like it when you make mistakes, because it shows that no one is perfect. I sometimes get anxious when I watch tutorials and the presenters seem to be so good. You also show that the struggles you experience throughout the process are real. Thanks for the tutorial, Alex.
Found this REALLY helpful! I love how you walk us through mistakes as well as explain WHY you do what you do throughout your videos. It adds so much value to each video. As always, THANK YOU ALEX!!
I am studying Data Collection and Data Visualization at King's College; your channel is recommended by our lecturers for understanding data cleaning.
Thank you for the data example, now I can connect all the code snippets that I learned individually and can finally use them together in your example!
Really one of the best exercises I have found so far!
Thank you so much, Alex!!
Thank you sir, you can't imagine how confident I feel in cleaning data after completing this video with real data practice. Thank you once again.
If you're getting an error when trying to split the address, this is what worked for me; I had to remove the number of values to look for.
df[["Street_Address", "State", "Zip_Code"]] = df["Address"].str.split(',', expand=True)
Use this instead; you have to include pat: df[["Street_Address", "State", "Zip_Code"]] = df["Address"].str.split(pat=',', n=2, expand=True)
thank you!
What does that do, exactly?
Some of the phone numbers are removed while doing the formatting. If you look in the excel file, you'll see that some of the numbers are strings and some are integers. When you run the string method during the formatting, it replaces the numeric values with NaN and they are later removed completely. If you want to avoid losing that data you'll need to use
df["Phone_Number"] = df["Phone_Number"].astype(str)
before formatting. You also won't need to convert to string in the lambda after doing this.
If you want to replace the empty values in Do_Not_Contact you'll need to use
df["Do_Not_Contact"].astype(str).replace("","N")
Technically those values are not empty, they are NaNs which is why replace is giving them 'NNN' instead of just the one 'N'. It's treating it as if NaN equals three blank spaces
that's what i've noticed too, great work
You are a genius, thanks :)
Thanks man, this worked.
Thank you! I was seeing this in my dataframe and couldn't understand why it was happening!
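The NaN point from this thread can be sketched quickly; fillna is the direct way to turn the missing values into 'N' (the values below are invented):

```python
import numpy as np
import pandas as pd

# Invented values: the "empty" cells are actually NaN
s = pd.Series(['Y', np.nan, 'N'])

# replace('', 'N') would do nothing here, because the cells hold NaN, not ''
filled = s.fillna('N')
```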
I enjoyed working on this project. Thank you Alex and a huge thank you to those guys who helped in the struggling minutes!
I've been struggling with Pandas a bit and this video cleared some things for me!
What frustrates me about the way my teachers taught Pandas is that their solutions are sometimes too efficient, in the sense that a student who started from zero and is taking an exam will never come up with these hyper-efficient, elegant one-liners. What I appreciate in your video is how you achieve the same results, but in a way that a beginner can easily remember and apply on an exam. Thank you! I'll be checking out more of your videos.
Instead of applying a lambda function to convert the Phone_Number column elements to strings, we can also use
df['Phone_Number'] = df['Phone_Number'].astype(str)
and pass a dictionary as the argument to the replace method to avoid Yes becoming YYes: df['Paying Customer'] = df['Paying Customer'].replace({'Y':'Yes','N':'No'})
Man, let's go! You are a hero to those of us who cannot afford paid courses.
I discovered that replace() has a regex argument. With regex=False (exact, whole-value matching) it won't change 'Yes' to 'Yeses'; only a bare 'Y' becomes 'Yes'. We can write df["Paying Customer"] = df["Paying Customer"].replace('Y', 'Yes', regex=False) and it will work as expected.
mine didn't work lol
mine also didn't work
works with a lambda function
df['Do_Not_Contact'] = df['Do_Not_Contact'].apply(lambda x: 'Yes' if x == 'Y' else x)
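To illustrate the exact-match behavior being debated in this thread (values invented): with Series.replace, matching is against the whole cell value, so an already-converted 'Yes' is left alone.

```python
import pandas as pd

# Invented values mixing converted and unconverted entries
s = pd.Series(['Y', 'N', 'Yes'])

# Series.replace matches whole cell values, so the existing 'Yes' is left
# alone and does not become 'Yeses'; only the bare 'Y' changes
out = s.replace('Y', 'Yes')
```

This is why re-running the cell is safe with replace, but not with str.replace, which matches substrings.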
Thank you for your work Alex! I went through the entire video step by step twice, and I can tell I learned a lot from it, finally understanding why we need to learn loops etc. and how simple cleaning methods work in Jupyter.
This is the best video I have ever watched on data cleaning using pandas. Even the mistakes were good to learn from.
This is really very important to both beginners and pros. Kudos!!
Also, to clean the Do_Not_Contact field, one can use: df['Do_Not_Contact'] = df['Do_Not_Contact'].replace({'N': 'No', 'Y': 'Yes'})
Hello Alex, thank you for such a wonderful tutorial . I have one suggestion regarding the last part where you are filtering
# Filtering the Data with "Do_Not_Contact" Column with N and " "
Filter1 = df["Do_Not_Contact"]=="N"
Filter2= df["Do_Not_Contact"]==""
df[Filter1 | Filter2]
After making it this far through the course over the last 2 months, looking at these last 4 videos I'm getting strong final exam vibes. Python has not felt intuitive to me at all, but I recognize its value. I guess it feels like taking Spanish 1 and having Spanish 2 tests. I'm definitely looking forward to applying what I've learned here to solidify the lessons more. I'm contracting for a company already and writing a proposal for them to transition to My SQL Server. I guess the fact that I feel overwhelmed with all the info means I'm actually learning how little I actually know, which is a good thing for growth in the long run. Rambling here, but I am incredibly thankful for the course, Alex.
Alex, you are the GOAT! For real, thank you for all the tutorials and your help for everyone who wants to become a data analyst!
Glad to do it! :D
At 15:19 I would like to add something: in the new version of Jupyter, if you write the code as Alex does, the data will stay the same. To fix this, you can add regex=True after the ''. Code: df['Phone_Number'].str.replace('[^a-zA-Z0-9]', '', regex=True). But overall I can't say anything except thank you, Alex, for this awesome tutorial!!!!
thanks buddy
thanks!!
For the address column: df[["Street_Address", "State", "Zip_Code"]] = df["Address"].str.split(",", n=2, expand=True). Passing only 2 positionally was giving me an error, so I had to change it to n=2.
This helped me, thank you! However, what does "n" mean?
@@DreaSimply21 The n=2 parameter indicates that the split should occur at most two times, producing three resulting parts.
Thank you for this. It helped me a great deal
Best video available on the internet so far for data cleaning in Pandas. Best explanation. 😇😇
Alex, I loved the video. It has a correct explanation. Thank you so much for your video.
There is a small mistake while you were typing:
# Another way to drop null values
df.dropna(subset='Column_name', inplace=True). I hope you will note the error.
Thank you.
Have a great day!
Oh my.. I am going to watch every single video you created..
Thanks a lot Alex for the video! This was exactly what I was looking for. May I request a video on how to write Python ETL code that reads a table from a cloud database like Snowflake, saves it in CSV format, transforms it, and then uploads it back to Snowflake, with all the steps captured in a txt log file!
vouching for this @Alex. It'd be really appreciated TIA
Hey Alex, I just started your Pandas tutorial, and I was waiting for the data cleaning video; when I opened YouTube, your video was the first thing I saw. This is a boon for me 😇🥺 Thanks, I hope you will upload Matplotlib, NumPy, and many more library videos ❤🤗
In the future, yes :)
My fav thing to do in pandas, thanks for making tutorial.
Thank you for this video. I just finished this part of the data analytics course and I definitely learned something new and helpful.
Simply amazing! Well-explained and comprehensive. Loved it!
And I was already looking for some Pandas tutorial. Thank you, Alex, this was much needed. :)
Glad to help!
Thank you Alex for this detailed breakdown. Just a side note for those who don't like to use loops e.g. for, while
For 31:00, you could use the following code: df.drop(df[df['Do_Not_Contact'] == 'Y'].index, inplace=True)
I'd say that's complicating the code. You can simply do
df = df[df['Do_Not_Contact'] != "Y"]
@@LuisRivera-oc6xh I literally used this the first time I was learning pandas myself
df = df.drop(df[df['Do_Not_Contact'] == 'Y'].index)
df = df.drop(df[df['Do_Not_Contact'] == ''].index)
OR
df = df[df['Do_Not_Contact'] == 'N']
Hey alex, we don't need to take any course because you are there 😉
I am doing your bootcamp of becoming a data analyst
Do it! I try my best to bring the best free content I can :)
Great Pandas data cleaning video. Thank you very much for sharing your knowledge.
Using regular expressions for manipulating data is beneficial because it allows you to change strings as needed, especially when dealing with different types of strings.
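As a tiny, hedged illustration of that flexibility (the pattern and strings below are made up): one pattern can normalize many differently-formatted strings at once.

```python
import re

# Invented phone strings in three different separator styles
messy = ['(555) 123-4567', '555.123.4567', '555/123/4567']

# One pattern handles all the variants: drop every non-digit character
cleaned = [re.sub(r'\D', '', s) for s in messy]
```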
I was intimidated by the Machine learning module but now I am not. Thanks a lot dude
Your explanation was super cool
Hi Alex, I don't know if you will see this comment. I was writing the same code, and I noticed that when you eliminated the characters from the phone numbers at 14:57, you also deleted the phone numbers that did not have any characters in them. You can see that at index 3 for Walter White: before, he had a phone number, but after, he had NaN. If you can tell me how to correct it, that would be great. I've never commented on your videos before, but I like them very much; they are very good and helpful. Thanks for everything.
Not sure if you're still looking for a solution, but from some online searching I found a way to avoid deleting phone numbers that didn't contain any characters to strip: add .astype(str) before .str.replace. This seems to fix the issue, and the code should look something like this:
df["Phone_Number"] = df['Phone_Number'].astype(str).str.replace('[^a-zA-Z0-9]', '', regex=True)
Also note you'll have to add regex=True manually.
Maybe it was deleting them because it somehow interpreted the whole number as non-numeric and dropped it erroneously; not 100% sure though, still a beginner, and it might cause issues with other types of data.
@@GlennLee-qz4st For me, Walter White's telephone number is being deleted before the str.replace instruction is even written. It's deleted as soon as I run
df['Last_Name'] = df['Last_Name'].str.lstrip('...')
df['Last_Name'] = df['Last_Name'].str.lstrip('/')
df['Last_Name'] = df['Last_Name'].str.rstrip('_')
for some reason.
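On the astype(str) caveat mentioned in this thread: one thing worth knowing is that astype(str) turns a real NaN into the literal four-character string 'nan', which survives the character cleanup and may need its own pass (illustrative values below):

```python
import numpy as np
import pandas as pd

# Invented values: one real number, one missing
s = pd.Series(['123-456-7890', np.nan])

as_str = s.astype(str)  # the NaN becomes the literal string 'nan'
cleaned = (as_str
           .str.replace('[^a-zA-Z0-9]', '', regex=True)  # strip separators
           .replace('nan', ''))  # exact-match removes the stray 'nan' string
```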
Very well done! Great video. I am working on analyzing and cleaning scraped data from web and this guide is helpful, especially where you mentioned the mistakes.
Thank you Alex. That Lambda example is going to be very useful.
Glad to hear it! :D
Thank you Alex. Your videos are very helpful. Now I can resume cleaning my data.
Timestamp 32:42. I simply use
#Filter out "Do_Not_Contact" == "Yes"
df[df['Do_Not_Contact']!='Yes']
Thank you so much sir, I have started my data cleaning journey with you. From India 💌
Thanks Alex, Please post more videos.
You are great, Alex. Your teaching skills are excellent.
Thanks! 😃
Thank you, this is one of the most elaborate yet simplest videos I've seen.
Thanks for your effort making this amazing video. It helped me a lot. I've been struggling with data cleaning and your video is helpful.
Thank you Alex for this video on data cleaning with pandas. It is very detailed and explanatory
If anyone is getting an error on df['Address'].str.split(",", 2, expand=True), you can omit the 2 and use df["Address"].str.split(",", expand=True)
@sdivi6881 Thank you so much 😊😊😊
Great video, ma'am. We need more tutorials of this type.
For explanation purposes, it is great.
For getting the final result, I would have done differently though
Thanks for the detailed tutorial Alex. I was wondering, if I wanted to become a data scientist instead of a data analyst, would you recommend any people in the industry whom I should follow? E.g., is there an Alex the Data Scientist out there? 😄
Many thanks for the dataset+code+video!!! 🔥🔥
The video I needed to get realistic practice in data cleaning. Thanks!
I not only survived! At 20:46 you can chain the replacements, e.g. .replace('nan--', ' ').replace('Na--', ' '). Thank you 1:1
In the Last_Name column we can use the replace function with a regular expression to remove characters like . / _
code:
df["Last_Name"] = df["Last_Name"].str.replace("[./_]", "", regex=True)
OMG Thank youuuu!!! I knew someone on here had to know the answer to how to use regex lol.
Thanks
Great video! I enjoyed learning from you! Thanks for making things easier to understand
Thanks for the video. Helped a lot in understanding Pandas.
Hi,
Nice explanation. But in this data cleaning you have simply removed NA values. As per my understanding we sometimes need to fill NA values instead, and I am not clear about the logic for filling them in. If you can provide a video on how to fill NA values, it will help us a lot.
Thanks,
Abhinav
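Since the question of how to fill NA values comes up here, a hedged sketch of two common strategies; the column names and values are invented, and the right fill logic always depends on the data:

```python
import numpy as np
import pandas as pd

# Invented columns; the right fill logic always depends on the data
df = pd.DataFrame({'age': [25, np.nan, 31],
                   'city': ['NY', np.nan, 'LA']})

# Numeric column: fill with a statistic such as the median
df['age'] = df['age'].fillna(df['age'].median())

# Categorical column: fill with a sentinel value (or the mode)
df['city'] = df['city'].fillna('Unknown')
```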
Great video, thanks! Can't help thinking that tools like ChatGPT, GitHub Copilot et al., and GPT Engineer can pretty much tell you how to do this, or do it all for you, so maybe I am wasting my time learning this 😅
Not an analyst (never wanted to be), but it was very interesting. Thanks!
Hey Alex, Thanks for the super content ...!
Your work is amazing. Thank you so much.
Hey Alex! This video is so helpful. At 32:30, instead of using a for loop, I think we can use df1 = df[(df['Do_Not_Contact'] == 'N') & (df['Phone_Number'] != '')] to get the same result.
38:36 df[['Street_address','State','Zip_code']]=df['Address'].str.split(" ",n=2, expand=True)
at about 33:54, whoa! unless you were specifically told to do this, you are altering the data! Changing no value to 'N' is a no-no unless you have been told to do so. Otherwise you're adding information that was not there. We don't know if Harry Potter wants to be contacted or not and that's probably for someone above our pay grade to decide! :D
For those who want to replace Y => Yes , N => No, just need to remove .str and use only replace, like this
df["Paying Customer"] = df["Paying Customer"].replace({'Y': 'Yes', 'N':'No'})
df["Do_Not_Contact"] = df["Do_Not_Contact"].replace({'Y': 'Yes', 'N':'No'})
df
A Glorious Thank You!! Please Keep This UP!!!!
Very helpful, and well explained.
# Step 1: Convert to string and clean non-digit characters
beta['Phone_Number'] = beta['Phone_Number'].apply(lambda x: ''.join(filter(str.isdigit, str(x))) if pd.notna(x) else x)
# Step 2: Format the phone number to xxx-xxx-xxxx if it is exactly 10 digits long
beta['Phone_Number'] = beta['Phone_Number'].apply(lambda x: f'{x[0:3]}-{x[3:6]}-{x[6:10]}' if pd.notna(x) and len(x) == 10 else x)
print(beta)
Thanks for this absolutely great video.
Guys, I tried this for the paying customer column
df2['Paying Customer'] = df2['Paying Customer'].apply(lambda x: 'Yes' if x == 'Y' else x)
df2['Paying Customer'] = df2['Paying Customer'].apply(lambda x: 'No' if x == 'N' else x)
Instead of stripping each symbol one by one at 9:11, I think it's better to use
characters_to_remove = ['/','...','_']
for x in characters_to_remove:
df["Last_Name"] = df["Last_Name"].str.strip(x)
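A related note, hedged: str.strip treats its argument as a set of characters, so the loop above can also collapse into a single call (the sample names below are invented):

```python
import pandas as pd

# Invented last names with the same stray symbols as the video
s = pd.Series(['/White', 'Potter_', '...Snape'])

# str.strip treats its argument as a set of characters, so one call
# removes any mix of '.', '/', '_' from both ends
cleaned = s.str.strip('./_')
```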
Super Explanation Thanks
Python is so fun
29:42
Just sharing my approach to remove the "don't call" rows
df = df[df['Do_Not_Contact'] != 'Y']
You can apply this to the missing phone number and the rest as well.
Man i love the comments section. Thank you for sharing this. This is a very simple method.
@@Charlay_Charlay glad that helped! You're welcome!
Nice one Alex. Don't forget to add comments to the code! 🙂
lol for sure!
Thank you soo much sir you're really a great professor 👏❤
thank you very much. your video helped me a lot. good luck
You really did a good job. I became a big fan of you. Thank you so much for doing this.
Thank you so much, Alex. You are the Best
Really enjoyed the video
I'm in love with ur videos
Amazing explanations!
In the case of the Phone_Number column with all the variants of NaN: first "stringify" the column, then do the formatting, and then replace the contents with an empty string whenever a cell contains two dashes (the formatted NaN leftovers).
Thank you!
df["Phone_Number"].str.replace('[^A-Za-z0-9]', '', regex=True)
Great stuff ! Do a collab with Rob Mulla !
Thank you Mr Alex
How do you, at 23:27, apply the changes and go back to the previous steps in the Jupyter notebook?
very well explained video thank youuuu
for those struggling on 33:55
df['Do_Not_Contact'].replace('', pd.NA, inplace=True)
df['Do_Not_Contact'].fillna('N', inplace=True)
Okay, maybe I was not right in my previous comment, but instead of using replace, you can just use an if/else function.
Please, please, please Alex, we need to know everything in depth about the new product Microsoft Fabric, how it will impact the industry, and whether it's time to convert from Mac to Windows for the sake of MS Fabric.
Favorite thing to do is to come to the comments section for any errors that don't make sense to me.
Thank you a lot, Alex! ^^