👷 Join the FREE Code Diagnosis Workshop to help you review code more effectively using my 3-Factor Diagnosis Framework: www.arjancodes.com/diagnosis
It's probably worth noting that pandas 2.0 (which is due to be released soon) implements the Apache Arrow backend, which brings it in line with Polars and massively reduces memory usage for non-numeric data types.
And it enables missing values for all types 😍
Until now, due to NumPy types, integers with missing values had to be cast to float, which is pretty annoying and inefficient.
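A minimal sketch of what opting into that backend could look like once 2.0 lands; the file name is just a placeholder:

```python
import pandas as pd

# Ask the reader for Arrow-backed dtypes (available from pandas 2.0).
# Strings stop being Python objects, and integer columns with missing
# values stay integers instead of being upcast to float.
df = pd.read_csv("sales.csv", dtype_backend="pyarrow")

print(df.dtypes)                      # e.g. int64[pyarrow], string[pyarrow]
print(df.memory_usage(deep=True))
```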
Great video as always 👍
Two optimizations, if I may 🙏, as I know the lib. You're reading the file and detecting types twice in your final code, which isn't efficient.
You could read it once and avoid any type inference attempt:
1. You are loading the CSV twice: once with the wrong types just to grab the column names, and then once with only the values, skipping the first two rows with `read_csv(..., skiprows=2)` to get better type inference. You could instead do `skiprows=[1]` to read it only once with the desired type inference. When given a list, read_csv skips only the rows at the provided indices (here the second line). This loads the column names and the required values in one go while keeping the improved type inference (the header line is excluded from type inference by default).
2. You can directly provide the types when reading the CSV by passing a dict to read_csv like this: `read_csv(..., dtype=mapping_types_conversion)`. This prevents pandas from wasting memory and time trying to infer them. "category" is also a valid pandas dtype, so you can add it to your dict and read_csv will build the desired dataframe from the start.
Parsing CSVs is pretty slow, and type inference can have a significant memory impact at load time, especially when dealing with big files. Those optimizations will lower your memory footprint and can speed things up a lot (a quick sketch of both tweaks follows below).
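A minimal sketch of the single-pass read with both suggestions applied; the file name, column names, and dtype mapping here are placeholders, not the ones from the video:

```python
import pandas as pd

# Hypothetical mapping of column names to target dtypes.
mapping_types_conversion = {
    "state": "category",
    "zip_code": "string",
    "amount": "float32",
}

# Read once: keep the header row for column names, skip only the
# second physical line (index 1), and hand pandas the dtypes up front
# so it doesn't have to infer them.
df = pd.read_csv(
    "sales.csv",
    skiprows=[1],
    dtype=mapping_types_conversion,
)
```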
We have been using "pandera" to define our pandas schemas. It is great for quickly converting the inferred data types to the target types ("coerce"). The key feature is that it can validate a pandas DataFrame against the Pandera schema and issue a report of errors when the DataFrame fails validation (either before or after coercion). It also supports checking against predefined constraints, rules, unique keys, etc., which significantly boosts confidence in both sourcing data (inputs) and transforming it to generate output datasets. Totally worth looking into!
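For anyone curious, a rough sketch of that pattern; the columns and checks are invented for illustration:

```python
import pandas as pd
import pandera as pa

df = pd.DataFrame({"state": ["CA", "NY", "ZZ"], "amount": ["10.5", "3", "-1"]})

schema = pa.DataFrameSchema({
    "state": pa.Column(str, pa.Check.isin(["CA", "NY", "TX"])),
    "amount": pa.Column(float, pa.Check.ge(0), coerce=True),  # cast, then check
})

try:
    validated = schema.validate(df, lazy=True)   # collect every failure at once
except pa.errors.SchemaErrors as err:
    print(err.failure_cases)                     # row-level report of what failed
```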
More pandas/numpy stuff would be appreciated! What's the difference between a pandas object dtype and a string representation?
any Polars fans around here?
Many of the gaps will be (or already are) solved in the next pandas release...
My recommendation is to spend your time learning Pandas + NumPy + SciPy rather than putting it into alternatives like Polars, Modin, Dask, Vaex... In the end, Pandas will close the gap with these alternatives.
@ErikS- I love pandas, but it's useful to have other tools. Polars is a nice library, but still quite immature.
Need to find a use case!
This is golden. I used to handle hundreds of GBs of data, and the memory usage was massive; 64 GB of RAM didn't even cut it. Apparently you have to use Modin.
Worth mentioning that it is possible to get huge savings for numeric data types as well, just by using a smaller byte representation for integers and floats.
Usually, the min/max values being represented are small enough to fit in 32 bits instead of the original 64-bit representation.
Int64 -> int32 = 50% less memory
But indeed, I am curious to see how the new backend under Pandas 2.0 handles this.
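A quick sketch of that kind of downcast, with a made-up column:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"quantity": np.arange(1_000_000, dtype="int64")})
print(df["quantity"].memory_usage(deep=True))   # ~8 MB at 64 bits

# Explicit downcast when you know the value range fits:
df["quantity"] = df["quantity"].astype("int32")
print(df["quantity"].memory_usage(deep=True))   # ~4 MB at 32 bits

# Or let pandas pick the smallest integer type that holds the data:
df["quantity"] = pd.to_numeric(df["quantity"], downcast="integer")
print(df["quantity"].dtype)
```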
Nice video! One minor thing that might be worth noting is that pandas converted the postcode to int, so the memory increase is for an int -> categorical conversion. There's a chance categorical would still be more memory efficient than object.
Just a small tip!
We can use the Categorical type if we know that the column holds only a small number of distinct values (for example: Gender --> Male, Female). In these cases, it's ideal to use the categorical type to save memory.
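A tiny sketch of that with made-up data:

```python
import pandas as pd

s = pd.Series(["Male", "Female"] * 500_000)       # object dtype
print(s.memory_usage(deep=True))

s_cat = s.astype("category")                      # 2 categories + small int codes
print(s_cat.memory_usage(deep=True))              # roughly 1 MB for the codes
```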
👍 As a coding hobbyist, it was fascinating to watch pandas being used in the terminal and not Jupyter notebooks..
Perhaps Arjan could consider a series of videos that actually builds a project?
High-quality content! Amazing explanation. More pandas, please!
More to come!
Thanks Arjan! I didn't know about this categorical data type. Thanks so much!
Happy to help!
Please do one using pandas. Something like a small ML app to crunch numbers. Love your content.
That was very interesting. Another team where I used to work (not mine - mine was doing JS/PHP) did a lot of Pandas, so it's nice to get a grasp of what they were doing... It would be good now if you did a part two with some examples of what can be achieved with Pandas once the data is efficiently loaded.
Pandas 2.0.0 is dropping in about 2 weeks with huge improvements
Are there going to be more videos about FastAPI, or maybe some personal project? I would love to watch you code something from scratch.
Awesome. I've been using datasets for a while on very large sqlite3 reads, but not with categorical types, I'll have to see how that works. Thanks.
Hey Arjan, I am a big fan of your videos. I find this one especially helpful.
Thank you, Sven, glad you found it helpful!
Just a heads up: when you want to return a certain type using the .astype(int) function, you'll get a NumPy integer type such as np.int64, which many things outside of pandas don't like. You'll still have to do int(thing) to convert it back to a normal Python int if your use case needs a regular int outside of pandas.
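A tiny sketch of what that looks like in practice:

```python
import pandas as pd

df = pd.DataFrame({"count": ["1", "2", "3"]}).astype(int)

value = df["count"].iloc[0]
print(type(value))        # <class 'numpy.int64'>

plain = int(value)        # or value.item()
print(type(plain))        # <class 'int'>
```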
@ArjanCodes I’ve become obsessed with learning Python and your videos have been incredibly helpful on my journey. 🤓
I have a question/vid suggestion: is there a way to run a Python script within a JavaScript app and have both dynamically update each other?
Azgaar’s Fantasy Map Generator is an amazing open source Javascript app and I want to “inject” my own Python apps into it and expand its functionalities.
My goal is to utilize and build upon the data the simulator already contains. I want to add modular Python systems that use the FMG data (CSVs, I think) as a starting point to simulate even more detailed systems (such as a Markov model simulating weather patterns using the biome data of map cells).
One method I think I understand is extracting CSVs from FMG, then importing them into Python and using Pandas (or something better) to parse it all.
I believe there must be a better way to create this project. I’m struggling to understand how best to solve this problem, any advice or maybe a video featuring you modding Azgaar’s Fantasy Map Generator? lol
Great tutorial, especially the last comparison part.
What if the column names are dynamic? Then what's the proper way to handle it on 1M records with 20+ columns?
I would love a similar video on NumPy. It's a library I use a lot, and I don't know how to use type hints with it, for example.
Noted!
It would be interesting to see the difference in memory consumption vs speed when specifying 32-bit vs 64-bit.
I constantly had slowdowns in NumPy code because it assigns 64-bit by default if not specified otherwise, but when I used 32-bit data (e.g. from an image) it would need to convert one of them to the same dtype, which for large arrays was costly.
So I always specify 32-bit unless I need the accuracy, e.g. matrix calculations, SVD, polynomial fitting.
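A small sketch of the promotion issue being described (array sizes are arbitrary):

```python
import numpy as np

image = np.random.rand(4000, 4000).astype(np.float32)   # 32-bit data, ~64 MB
kernel = np.random.rand(4000, 4000)                      # float64 by default

# Mixing dtypes upcasts the float32 operand, so the work is done in
# float64 and costs extra memory and time on large arrays.
mixed = image * kernel
print(mixed.dtype)                                       # float64

# Keeping everything 32-bit avoids the promotion:
kernel32 = kernel.astype(np.float32)
print((image * kernel32).dtype)                          # float32
```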
Just passing by to note that it is possible to specify data types in the read methods. It is always efficient (it avoids type-inference processing cycles) and sometimes it is almost necessary; e.g. for the zip codes column you would lose leading zeros if you let pandas read it as integers. You could always `str.zfill(5)` the strings later, but is that good design?
Just wanted to note the leading zeros of the zip codes, and then saw your comment :-)
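A minimal sketch of the difference (the column name "zip" is an assumption):

```python
import io
import pandas as pd

csv = io.StringIO("zip\n01234\n90210\n")

# Declared up front: the leading zero survives.
good = pd.read_csv(csv, dtype={"zip": "string"})
print(good["zip"].tolist())       # ['01234', '90210']

# Inferred as int: the zero is gone and has to be patched back later.
csv.seek(0)
bad = pd.read_csv(csv)
print(bad["zip"].astype("string").str.zfill(5).tolist())
```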
But the Pandas backend has switched to PyArrow (Pandas 2.0), please make a small update!!! :) I'm struggling with datatype changes in PyArrow (Pandas 2).
Pandas is a bit messy. I switched to Pola-rs and haven’t looked back.
Great video. Very useful. Thank you.
You are welcome! ❤
this video is all I need now :D thanks
You are welcome!
Amazing job!! Great content!
Thanks so much!
Is there any opportunity to use pydantic when importing data from CSV files to pandas? What about storing DataFrames vs CSV files?
Maybe you didn't notice, but when retrieving data from Olist you hovered over my city here in Brazil!
Was that a signal?
Excellent
Thank you so much!
Another useful conversion is downcasting, e.g., going from int64 to int32 etc.
That cable management in the cover image 😰😱
I'd love a rundown on debugging pandas. I always get lost in the debugger when working with pandas.
What are the general rules of when it makes sense to convert an "object" type column into a "categorical" type?
From your example, the ZIP codes may not be the best to convert to categorical type, since there are so many ZIP codes, whereas the State column only has a few values, so it makes sense to convert it from object type to categorical.
Are there heuristics to decide when to convert to categorical? Like if the number of unique values is below some fraction of the row count?
Compare the number of unique values with the total count of values in the column (or with the row count). Also, keep an eye on missing data (empties, NULLs, Nones, NAs and the like) from the start; think about whether it's more useful for the use case to represent them as e.g. null or as a separate category. Then make the conversion and compare the memory usage with the original, as in the video. It's quick and beats the heuristics ;-)
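A sketch of that quick comparison, with invented data:

```python
import pandas as pd

s = pd.Series(["CA", "NY", "TX"] * 300_000 + [None] * 100)

ratio = s.nunique() / len(s)       # a low ratio makes a good categorical candidate
print(f"unique ratio: {ratio:.4%}")

before = s.memory_usage(deep=True)
after = s.astype("category").memory_usage(deep=True)
print(before, "->", after)

# Note: None stays NaN after the conversion rather than becoming its own
# category; fillna("missing") before converting if you want it as a category.
```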
Great autocomplete when typing pip install pandas. What is it? Some oh-my-zsh settings or plugin?
It’s called Fig. I’m going to cover my Mac setup in detail in a few weeks.
can you make a video about python generic types?
Reading from CSV, a column of numerical data longer than 11 digits gets read and saved wrongly in scientific notation.. how can this be resolved when reading the CSV?
Where can I give multiple thumbs up?
❤
❤
oooooo, now we're talking
Boss
idk why this channel turned into a data channel instead of a Python one.
Way too many steps for data processing. Thankfully we have databases and SQL.
None of this would be any different in a relational database. SQL performance is very dependent on types.
If you are not taking the same type considerations into account when building a database, you should not be touching a database.
Polars is the fast in-between; it's kind of like a merge between SQL and pandas. It reads more like SQL, but you can do things in a pandas sort of way, and knowing pandas may help you understand how it works. Using Polars is a lot easier than working with SQL in many ways. For example, you can select columns dynamically by type or regex, it's easier to pivot/unpivot, and you can build expressions from functions, making repetitive stuff easier to read than it is in pure SQL (since installing a lot of functions with a specific task in a database gets messy after a while).
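To illustrate the column-selection and reusable-expression points, a small sketch with made-up data (not code from the thread):

```python
import polars as pl

df = pl.DataFrame({
    "state": ["CA", "NY"],
    "sales_q1": [10.0, 20.0],
    "sales_q2": [15.0, 25.0],
})

# Select columns dynamically: by dtype or by regex.
print(df.select(pl.col(pl.Float64)))
print(df.select(pl.col("^sales_.*$")))

# Build reusable expressions from plain functions.
def pct_of_total(name: str) -> pl.Expr:
    return (pl.col(name) / pl.col(name).sum()).alias(f"{name}_pct")

print(df.select(pct_of_total("sales_q1"), pct_of_total("sales_q2")))
```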