Data Profiling 101 in Python

When you look at a dataset for the first time, it is good practice to assess it quickly: the size of the dataset, the number of columns, the types of the variables, whether there are any missing values, and so on. In this post, we will go through a number of Python commands that are useful for quickly profiling a dataset. Let's start.

Let’s import some packages

# Import
import pandas as pd
import numpy as np

Let's load a dataset to play with

# Here I am loading the Titanic Dataset saved locally on my computer
df = pd.read_csv('titanic.csv')
df.head()
PassengerId Pclass Name Sex Age SibSp Parch Ticket Fare Cabin Embarked
0 892 3 Kelly, Mr. James male 34.5 0 0 330911 7.8292 NaN Q
1 893 3 Wilkes, Mrs. James (Ellen Needs) female 47.0 1 0 363272 7.0000 NaN S
2 894 2 Myles, Mr. Thomas Francis male 62.0 0 0 240276 9.6875 NaN Q
3 895 3 Wirz, Mr. Albert male 27.0 0 0 315154 8.6625 NaN S
4 896 3 Hirvonen, Mrs. Alexander (Helga E Lindqvist) female 22.0 1 1 3101298 12.2875 NaN S

Quick data profiling

The first element of the dataset we might want to understand is its size. We can easily determine that our dataset has 418 rows and 11 columns using shape.

df.shape
(418, 11)
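Since shape is just a (rows, columns) tuple, you can unpack it or grab each dimension on its own. Here is a minimal sketch using a small hand-built frame (the Titanic CSV lives on the author's machine, so a stand-in is used):

```python
import pandas as pd

# Small stand-in frame; the real post uses the Titanic test set
df = pd.DataFrame({
    'Age': [34.5, 47.0, None],
    'Fare': [7.8292, 7.0000, 9.6875],
    'Sex': ['male', 'female', 'male'],
})

n_rows, n_cols = df.shape    # shape is a (rows, columns) tuple
print(n_rows, n_cols)        # 3 3
print(len(df))               # len() also gives the row count
print(df.columns.tolist())   # ['Age', 'Fare', 'Sex']
```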

The info function is also very helpful. It outputs the number of columns the dataframe contains, the column types, and how many non-null values each column has:

df.info()
<class 'pandas.core.frame.dataframe'="">
RangeIndex: 418 entries, 0 to 417
Data columns (total 11 columns):
PassengerId    418 non-null int64
Pclass         418 non-null int64
Name           418 non-null object
Sex            418 non-null object
Age            332 non-null float64
SibSp          418 non-null int64
Parch          418 non-null int64
Ticket         418 non-null object
Fare           417 non-null float64
Cabin          91 non-null object
Embarked       418 non-null object
dtypes: float64(2), int64(4), object(5)
memory usage: 36.0+ KB
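info prints its report to the screen; if you want the column types programmatically, dtypes and select_dtypes are handy. A quick sketch on a toy frame:

```python
import pandas as pd

# Toy frame with a mix of numeric and object columns
df = pd.DataFrame({
    'PassengerId': [892, 893, 894],
    'Age': [34.5, 47.0, None],
    'Name': ['Kelly', 'Wilkes', 'Myles'],
})

print(df.dtypes)  # the type of each column, as a Series

# Keep only the numeric columns (useful before computing statistics)
numeric_cols = df.select_dtypes(include='number').columns
print(numeric_cols.tolist())  # ['PassengerId', 'Age']
```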

You can also very easily determine how many null values each column contains:

df.isnull().sum()
PassengerId      0
Pclass           0
Name             0
Sex              0
Age             86
SibSp            0
Parch            0
Ticket           0
Fare             1
Cabin          327
Embarked         0
dtype: int64
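A small extension of the same idea: sorting the null counts ranks the columns from most to least incomplete, which makes the problem columns jump out in wide datasets. A sketch on a toy frame:

```python
import pandas as pd

# Toy frame with deliberately missing values
df = pd.DataFrame({
    'Age': [34.5, None, None],
    'Fare': [7.8, 7.0, None],
    'Sex': ['male', 'female', 'male'],
})

# Null count per column, worst offenders first
null_counts = df.isnull().sum().sort_values(ascending=False)
print(null_counts)

worst = null_counts.index[0]
print(worst)  # 'Age'
```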

If you want the proportion of missing values in each column, as opposed to a count, divide by the number of rows:

df.isnull().sum() / df.shape[0]
PassengerId    0.000000
Pclass         0.000000
Name           0.000000
Sex            0.000000
Age            0.205742
SibSp          0.000000
Parch          0.000000
Ticket         0.000000
Fare           0.002392
Cabin          0.782297
Embarked       0.000000
dtype: float64
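Dividing by df.shape[0] works, but the same ratio falls out of mean() directly: isnull() returns booleans, and the mean of booleans is the fraction that are True. A quick sketch:

```python
import pandas as pd

# Toy frame: 2 of 4 Ages missing, 1 of 4 Fares missing
df = pd.DataFrame({
    'Age': [34.5, None, None, 22.0],
    'Fare': [7.8, 7.0, 9.7, None],
})

rate_a = df.isnull().sum() / df.shape[0]
rate_b = df.isnull().mean()  # mean of booleans = fraction missing

print(rate_b)  # Age 0.50, Fare 0.25
```

Both expressions produce the same Series; mean() is just the more compact spelling.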

Finally, and this one is my favorite: one can obtain lots of statistics for each numerical column using the describe function:

df.describe()
PassengerId Pclass Age SibSp Parch Fare
count 418.000000 418.000000 332.000000 418.000000 418.000000 417.000000
mean 1100.500000 2.265550 30.272590 0.447368 0.392344 35.627188
std 120.810458 0.841838 14.181209 0.896760 0.981429 55.907576
min 892.000000 1.000000 0.170000 0.000000 0.000000 0.000000
25% 996.250000 1.000000 21.000000 0.000000 0.000000 7.895800
50% 1100.500000 3.000000 27.000000 0.000000 0.000000 14.454200
75% 1204.750000 3.000000 39.000000 1.000000 0.000000 31.500000
max 1309.000000 3.000000 76.000000 8.000000 9.000000 512.329200
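By default describe only covers numeric columns, but passing include='object' summarizes the text columns too (count, number of unique values, most frequent value, and its frequency). A sketch on a toy frame:

```python
import pandas as pd

# Toy frame mixing numeric and object columns
df = pd.DataFrame({
    'Age': [34.5, 47.0, 22.0],
    'Sex': ['male', 'female', 'male'],
    'Embarked': ['Q', 'S', 'Q'],
})

summary = df.describe(include='object')
print(summary)
# Rows: count / unique / top / freq, one column per object column
print(summary.loc['top', 'Sex'])  # 'male'
```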

Et voilà!
