Using The Pandas Category Data Type

Introduction

In my previous article, I wrote about pandas data types: what they are and how to convert data to the appropriate type. This article will focus on the pandas categorical data type and some of the benefits and drawbacks of using it.

Pandas Category Data Type

To refresh your memory, here is a summary table of the various pandas data types (aka dtypes).

Pandas dtype mapping
Pandas dtype     Python type   NumPy type                                                       Usage
object           str           string_, unicode_                                                Text
int64            int           int_, int8, int16, int32, int64, uint8, uint16, uint32, uint64   Integer numbers
float64          float         float_, float16, float32, float64                                Floating point numbers
bool             bool          bool_                                                            True/False values
datetime64       NA            datetime64[ns]                                                   Date and time values
timedelta[ns]    NA            NA                                                               Differences between two datetimes
category         NA            NA                                                               Finite list of text values

This article will focus on categorical data. As a quick refresher, categorical data is data which takes on a finite number of possible values. For example, if we were talking about a physical product like a t-shirt, it could have categorical variables such as:

  • Size (X-Small, Small, Medium, Large, X-Large)
  • Color (Red, Black, White)
  • Style (Short sleeve, long sleeve)
  • Material (Cotton, Polyester)

Attributes such as cost, price, and quantity are typically integers or floats.

The key takeaway is that whether or not a variable is categorical depends on its application. Since we only have 3 colors of shirts, then that is a good categorical variable. However, “color” could represent thousands of values in other situations so it would not be a good choice.

There is no hard and fast rule for how many values a categorical variable should have. You should apply your domain knowledge to make that determination on your own data sets. In this article, we will look at one approach for identifying categorical values.

The category data type in pandas is a hybrid data type. It looks and behaves like a string in many instances but internally is represented by an array of integers. This allows the data to be sorted in a custom order and stored more efficiently.
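
To see what that integer representation looks like, here is a quick sketch (the toy Series is hypothetical, not from the article's data set):

import pandas as pd

sizes = pd.Series(['Small', 'Large', 'Medium', 'Small'], dtype='category')
sizes.cat.categories   # Index(['Large', 'Medium', 'Small'], dtype='object')
sizes.cat.codes        # the underlying integers: 2, 0, 1, 2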

At the end of the day, why do we care about using categorical values? There are 3 main reasons:

  • We can define a custom sort order which can improve summarizing and reporting the data. In the example above, “X-Small” < “Small” < “Medium” < “Large” < “X-Large”. Alphabetical sorting would not be able to reproduce that order.
  • Some of the python visualization libraries can interpret the categorical data type to apply appropriate statistical models or plot types.
  • Categorical data uses less memory which can lead to performance improvements.

While categorical data is very handy in pandas, it is not necessary for every type of analysis. In fact, there can be some edge cases where defining a column of data as categorical and then manipulating the dataframe can lead to some surprising results. Care must be taken to understand the data set and the necessary analysis before converting columns to categorical data types.

Data Preparation

One of the main use cases for categorical data types is more efficient memory usage. In order to demonstrate, we will use a large data set from the US Centers for Medicare and Medicaid Services. This data set includes a 500MB+ csv file that has information about research payments to doctors and hospitals in fiscal year 2017.

First, set up imports and read in all the data:

import pandas as pd
from pandas.api.types import CategoricalDtype

df_raw = pd.read_csv('OP_DTL_RSRCH_PGYR2017_P06292018.csv', low_memory=False)

I have included the low_memory=False parameter in order to suppress this warning:

interactiveshell.py:2728: DtypeWarning: Columns (..) have mixed types. Specify dtype option on import or set low_memory=False.
  interactivity=interactivity, compiler=compiler, result=result)

Feel free to read more about this parameter in the pandas read_csv documentation.

One interesting thing about this data set is that it has 176 columns but many of them are empty. I found a Stack Overflow solution to quickly drop all the columns where at least 90% of the data is empty; a sketch of the idea follows. I thought this might be handy for others as well.
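
The solution itself is not reproduced here; a minimal sketch of the idea, using dropna with a thresh argument (the exact Stack Overflow code may differ):

# Drop columns that are more than 90% empty by requiring
# at least 10% non-null values per column
df = df_raw.dropna(axis='columns', thresh=int(len(df_raw) * 0.1))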

Let’s take a look at the size of these various dataframes. Here is the original data set:

df_raw.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 607865 entries, 0 to 607864
Columns: 176 entries, Change_Type to Context_of_Research
dtypes: float64(34), int64(3), object(139)
memory usage: 816.2+ MB

The 500MB csv file fills about 816MB of memory. This seems large but even a low-end laptop has several gigabytes of RAM so we are nowhere near the need for specialized processing tools.
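
As an aside, the trailing “+” in that memory figure means pandas did not deeply inspect the object columns; if you want the exact footprint, you can request it explicitly (standard pandas options, shown here as a sketch):

# Report exact memory usage, including the strings stored in object columns
df_raw.info(memory_usage='deep')
# or get a single total in bytes
df_raw.memory_usage(deep=True).sum()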

Here is the data set we will use for the rest of the article:

df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 607865 entries, 0 to 607864
Data columns (total 33 columns):
Change_Type                 607865 non-null object
Covered_Recipient_Type      607865 non-null object
.....
Payment_Publication_Date    607865 non-null object
dtypes: float64(2), int64(3), object(28)
memory usage: 153.0+ MB

Now that we only have 33 columns, taking 153MB of memory, let’s take a look at which columns might be good candidates for a categorical data type.

In order to make this a little easier, I created a small helper function to create a dataframe showing all the unique values in a column.

unique_counts = pd.DataFrame.from_records([(col, df[col].nunique()) for col in df.columns],
                                          columns=['Column_Name', 'Num_Unique']).sort_values(by=['Num_Unique'])
    Column_Name                                          Num_Unique
0   Change_Type                                                   1
27  Delay_in_Publication_Indicator                                1
31  Program_Year                                                  1
32  Payment_Publication_Date                                      1
29  Dispute_Status_for_Publication                                2
26  Preclinical_Research_Indicator                                2
22  Related_Product_Indicator                                     2
25  Form_of_Payment_or_Transfer_of_Value                          3
1   Covered_Recipient_Type                                        4
14  Principal_Investigator_1_Country                              4
15  Principal_Investigator_1_Primary_Type                         6
6   Recipient_Country                                             9
21  Applicable_Manufacturer_or_Applicable_GPO_Maki…              20
4   Recipient_State                                              53
12  Principal_Investigator_1_State                               54
17  Principal_Investigator_1_License_State_code                 154
16  Principal_Investigator_1_Specialty                           243
24  Date_of_Payment                                              365
18  Submitting_Applicable_Manufacturer_or_Applicab…             478
19  Applicable_Manufacturer_or_Applicable_GPO_Maki…             551
20  Applicable_Manufacturer_or_Applicable_GPO_Maki…             557
11  Principal_Investigator_1_City                               4101
3   Recipient_City                                              4277
8   Principal_Investigator_1_First_Name                         8300
5   Recipient_Zip_Code                                         12826
28  Name_of_Study                                              13015
13  Principal_Investigator_1_Zip_Code                          13733
9   Principal_Investigator_1_Last_Name                         21420
10  Principal_Investigator_1_Business_Street_Addre…            29026
7   Principal_Investigator_1_Profile_ID                        29696
2   Recipient_Primary_Business_Street_Address_Line1            38254
23  Total_Amount_of_Payment_USDollars                          141959
30  Record_ID                                                  607865

This table highlights a couple of items that will help determine which values should be categorical. First, there is a big jump in unique values once we get above 557 unique values. This should be a useful threshold for this data set.

In addition, the date fields should not be converted to categorical.
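
A more natural type for those columns is datetime64; a hedged sketch of that conversion (letting pandas infer the date format in this file):

# Convert the payment date to a proper datetime instead of a category
df['Date_of_Payment'] = pd.to_datetime(df['Date_of_Payment'])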

The simplest way to convert a column to a categorical type is to use astype('category'). We can use a loop to convert all the columns we care about:

cols_to_exclude = ['Program_Year', 'Date_of_Payment', 'Payment_Publication_Date']
for col in df.columns:
    if df[col].nunique() < 600 and col not in cols_to_exclude:
        df[col] = df[col].astype('category')

If we use df.info() to look at the memory usage, we have taken the 153 MB dataframe down to 82.4 MB. This is pretty impressive. We have cut the memory usage almost in half just by converting the majority of our columns to categorical values.
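
To see where the savings come from, you can compare a single column before and after conversion (memory_usage(deep=True) is standard pandas; the exact byte counts will vary):

# Compare one column's footprint as object vs. category
before = df_raw['Covered_Recipient_Type'].memory_usage(deep=True)
after = df['Covered_Recipient_Type'].memory_usage(deep=True)
print(f'object: {before:,} bytes  category: {after:,} bytes')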

There is one other feature we can use with categorical data - defining a custom order. To illustrate, let’s do a quick summary of the total payments made by covered recipient type:

df.groupby('Covered_Recipient_Type')['Total_Amount_of_Payment_USDollars'].sum().to_frame()
                                     Total_Amount_of_Payment_USDollars
Covered_Recipient_Type
Covered Recipient Physician                               7.912815e+07
Covered Recipient Teaching Hospital                       1.040372e+09
Non-covered Recipient Entity                              3.536595e+09
Non-covered Recipient Individual                          2.832901e+06

If we want to change the order of the Covered_Recipient_Type, we need to define a custom CategoricalDtype:

cats_to_order = ["Non-covered Recipient Entity",
                 "Covered Recipient Teaching Hospital",
                 "Covered Recipient Physician",
                 "Non-covered Recipient Individual"]
covered_type = CategoricalDtype(categories=cats_to_order, ordered=True)

Then, explicitly reorder the category:

df['Covered_Recipient_Type'] = df['Covered_Recipient_Type'].cat.reorder_categories(cats_to_order, ordered=True)

Now, we can see the sort order in effect with the groupby:

df.groupby('Covered_Recipient_Type')['Total_Amount_of_Payment_USDollars'].sum().to_frame()
                                     Total_Amount_of_Payment_USDollars
Covered_Recipient_Type
Non-covered Recipient Entity                              3.536595e+09
Covered Recipient Teaching Hospital                       1.040372e+09
Covered Recipient Physician                               7.912815e+07
Non-covered Recipient Individual                          2.832901e+06

If you have this same type of data file that you will be processing repeatedly, you can specify this conversion when reading the csv by passing a dictionary of column names and types via the dtype parameter:

df_raw_2 = pd.read_csv('OP_DTL_RSRCH_PGYR2017_P06292018.csv',
                       dtype={'Covered_Recipient_Type': covered_type})

Performance

We’ve shown that the size of the dataframe is reduced by converting values to categorical data types. Does this impact other areas of performance? The answer is yes.

Here is an example of a groupby operation on the categorical vs. object data types. First, perform the analysis on the original input dataframe.

%%timeit
df_raw.groupby('Covered_Recipient_Type')['Total_Amount_of_Payment_USDollars'].sum().to_frame()
40.3 ms ± 2.38 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

Now, on the dataframe with categorical data:

%%timeit
df.groupby('Covered_Recipient_Type')['Total_Amount_of_Payment_USDollars'].sum().to_frame()
4.51 ms ± 96.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In this case we sped up the code by almost 10x, going from 40.3 ms to 4.51 ms. You can imagine that on much larger data sets, the speedup could be even greater.

Watch Outs

Photo credit: Frans Van Heerden

Categorical data seems pretty nifty. It saves memory and speeds up code, so why not use it everywhere? Well, Donald Knuth is correct when he warns about premature optimization:

The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming.

In the examples above, the code is faster but it really does not matter when it is used for quick summary actions that are run infrequently. In addition, all the work to figure out and convert to categorical data is probably not worth it for this data set and this simple analysis.

In addition, categorical data can yield some surprising behaviors in real world usage. The examples below will illustrate a couple of issues.

Let’s build a simple dataframe with one ordered categorical variable that represents the status of the customer. This trivial example will highlight some potential subtle errors when dealing with categorical values. It is worth noting that this example shows how to use astype() to convert to the ordered category in one step instead of the two-step process used earlier.

import pandas as pd
from pandas.api.types import CategoricalDtype

sales_1 = [{'account': 'Jones LLC', 'Status': 'Gold', 'Jan': 150, 'Feb': 200, 'Mar': 140},
           {'account': 'Alpha Co', 'Status': 'Gold', 'Jan': 200, 'Feb': 210, 'Mar': 215},
           {'account': 'Blue Inc', 'Status': 'Silver', 'Jan': 50, 'Feb': 90, 'Mar': 95}]
df_1 = pd.DataFrame(sales_1)
status_type = CategoricalDtype(categories=['Silver', 'Gold'], ordered=True)
df_1['Status'] = df_1['Status'].astype(status_type)

This yields a simple dataframe that looks like this:

   Feb  Jan  Mar  Status  account
0  200  150  140    Gold  Jones LLC
1  210  200  215    Gold  Alpha Co
2   90   50   95  Silver  Blue Inc

We can inspect the categorical column in more detail:

df_1['Status']
0      Gold
1      Gold
2    Silver
Name: Status, dtype: category
Categories (2, object): [Silver < Gold]

All looks good. We see the data is all there and that Gold is greater than Silver.
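
Because the category is ordered, comparisons against one of the defined values behave as you would expect; a quick sketch:

# Ordered categoricals support comparison operators
df_1['Status'] > 'Silver'
# 0     True
# 1     True
# 2    False
# Name: Status, dtype: bool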

Now, let’s bring in another dataframe and apply the same category to the status column:

sales_2 = [{'account': 'Smith Co', 'Status': 'Silver', 'Jan': 100, 'Feb': 100, 'Mar': 70},
           {'account': 'Bingo', 'Status': 'Bronze', 'Jan': 310, 'Feb': 65, 'Mar': 80}]
df_2 = pd.DataFrame(sales_2)
df_2['Status'] = df_2['Status'].astype(status_type)
   Feb  Jan  Mar  Status  account
0  100  100   70  Silver  Smith Co
1   65  310   80     NaN  Bingo

Hmm. Something happened to our status. If we just look at the column in more detail:

df_2['Status']
0    Silver
1       NaN
Name: Status, dtype: category
Categories (2, object): [Silver < Gold]

We can see that since we did not define “Bronze” as a valid status, we end up with a NaN value. Pandas does this for a perfectly good reason. It assumes that you have defined all of the valid categories and in this case, “Bronze” is not valid. You can just imagine how confusing this issue could be to troubleshoot if you were not looking out for it.

This scenario is relatively easy to see but what would you do if you had hundreds of values and the data was not cleaned and normalized properly?
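
One defensive option is to compare the raw values against the defined categories before converting; a minimal sketch, assuming the check runs on the column before astype() is called:

# Flag values that are not part of the defined category list
raw_status = pd.DataFrame(sales_2)['Status']
unexpected = set(raw_status.dropna().unique()) - set(status_type.categories)
if unexpected:
    print(f'Values not in the defined categories: {unexpected}')
# Values not in the defined categories: {'Bronze'}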

Here’s another tricky example where you can “lose” the category object:

sales_1 = [{'account': 'Jones LLC', 'Status': 'Gold', 'Jan': 150, 'Feb': 200, 'Mar': 140},
           {'account': 'Alpha Co', 'Status': 'Gold', 'Jan': 200, 'Feb': 210, 'Mar': 215},
           {'account': 'Blue Inc', 'Status': 'Silver', 'Jan': 50, 'Feb': 90, 'Mar': 95}]
df_1 = pd.DataFrame(sales_1)

# Define an unordered category
df_1['Status'] = df_1['Status'].astype('category')

sales_2 = [{'account': 'Smith Co', 'Status': 'Silver', 'Jan': 100, 'Feb': 100, 'Mar': 70},
           {'account': 'Bingo', 'Status': 'Bronze', 'Jan': 310, 'Feb': 65, 'Mar': 80}]
df_2 = pd.DataFrame(sales_2)
df_2['Status'] = df_2['Status'].astype('category')

# Combine the two dataframes into one
df_combined = pd.concat([df_1, df_2])
   Feb  Jan  Mar  Status  account
0  200  150  140    Gold  Jones LLC
1  210  200  215    Gold  Alpha Co
2   90   50   95  Silver  Blue Inc
0  100  100   70  Silver  Smith Co
1   65  310   80  Bronze  Bingo

Everything looks ok but upon further inspection, we’ve lost our category data type:

df_combined['Status']
0      Gold
1      Gold
2    Silver
0    Silver
1    Bronze
Name: Status, dtype: object

In this case, the data is still there but the type has been converted to an object. Once again, this is pandas’ attempt to combine the data without throwing errors and without making assumptions. If you want to convert to a category data type now, you can use astype('category'), or keep the dtype intact through the concat as shown in the sketch below.
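
pd.concat will preserve the categorical dtype if both columns share an identical CategoricalDtype before the concat; a minimal sketch, assuming a combined category list for this toy data:

# Give both dataframes the same CategoricalDtype so concat preserves it
all_statuses = CategoricalDtype(categories=['Bronze', 'Silver', 'Gold'], ordered=True)
df_1['Status'] = df_1['Status'].astype(all_statuses)
df_2['Status'] = df_2['Status'].astype(all_statuses)
df_combined = pd.concat([df_1, df_2])
df_combined['Status'].dtype  # category, not object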

General Guidelines

Now that you know about these gotchas, you can watch out for them. But I will give a few guidelines for how I recommend using categorical data types:

  1. Do not assume you need to convert all categorical data to the pandas category data type.
  2. If the data set starts to approach an appreciable percentage of your usable memory, then consider using categorical data types.
  3. If you have very significant performance concerns with operations that are executed frequently, look at using categorical data.
  4. If you are using categorical data, add some checks to make sure the data is clean and complete before converting to the pandas category type. Additionally, check for NaN values after combining or converting dataframes.

I hope this article was helpful. Categorical data types in pandas can be very useful. However, there are a few issues that you need to keep an eye out for so that you do not get tripped up in subsequent processing. Feel free to add any additional tips or questions in the comments section below.

Changes

  • 6-Dec-2020: Fix typo in groupby example