This notebook contains an excerpt from the Python Data Science Handbook by Jake VanderPlas; the content is available on GitHub.
The text is released under the CC-BY-NC-ND license, and code is released under the MIT license. If you find this content useful, please consider supporting the work by buying the book!
Feature Engineering¶
The previous sections outline the fundamental ideas of machine learning, but all of the examples assume that you have numerical data in a tidy, [n_samples, n_features]
format. In the real world, data rarely comes in such a form. With this in mind, one of the more important steps in using machine learning in practice is feature engineering: that is, taking whatever information you have about your problem and turning it into numbers that you can use to build your feature matrix.
In this section, we will cover a few common examples of feature engineering tasks: features for representing categorical data, features for representing text, and features for representing images. Additionally, we will discuss derived features for increasing model complexity and imputation of missing data. Often this process is known as vectorization, as it involves converting arbitrary data into well-behaved vectors.
Categorical Features¶
One common type of non-numerical data is categorical data. For example, imagine you are exploring some data on housing prices, and along with numerical features like "price" and "rooms", you also have "neighborhood" information. For example, your data might look something like this:
data = [
{'price': 850000, 'rooms': 4, 'neighborhood': 'Queen Anne'},
{'price': 700000, 'rooms': 3, 'neighborhood': 'Fremont'},
{'price': 650000, 'rooms': 3, 'neighborhood': 'Wallingford'},
{'price': 600000, 'rooms': 2, 'neighborhood': 'Fremont'}
]
You might be tempted to encode this data with a straightforward numerical mapping:
{'Queen Anne': 1, 'Fremont': 2, 'Wallingford': 3};
It turns out that this is not generally a useful approach in Scikit-Learn: the package's models make the fundamental assumption that numerical features reflect algebraic quantities. Thus such a mapping would imply, for example, that Queen Anne < Fremont < Wallingford, or even that Wallingford - Queen Anne = Fremont, which (niche demographic jokes aside) does not make much sense.
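To make the pitfall concrete, here is what that naive mapping would look like if applied directly (a purely hypothetical sketch using pandas on the data defined above, shown only to illustrate what not to do):
import pandas as pd

# an ill-advised integer encoding of the neighborhoods, for illustration only
df = pd.DataFrame(data)
df['neighborhood'] = df['neighborhood'].map(
    {'Queen Anne': 1, 'Fremont': 2, 'Wallingford': 3})
df
A linear model fit to this column would treat "Wallingford" as literally three times "Queen Anne", which is exactly the spurious algebra described above.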
In this case, one proven technique is to use one-hot encoding, which effectively creates extra columns indicating the presence or absence of a category with a value of 1 or 0, respectively. When your data comes as a list of dictionaries, Scikit-Learn's DictVectorizer
will do this for you:
from sklearn.feature_extraction import DictVectorizer
vec = DictVectorizer(sparse=False, dtype=int)
vec.fit_transform(data)
array([[     0,      1,      0, 850000,      4],
       [     1,      0,      0, 700000,      3],
       [     0,      0,      1, 650000,      3],
       [     1,      0,      0, 600000,      2]])
Notice that the 'neighborhood' column has been expanded into three separate columns, representing the three neighborhood labels, and that each row has a 1 in the column associated with its neighborhood. With these categorical features thus encoded, you can proceed as normal with fitting a Scikit-Learn model.
To see the meaning of each column, you can inspect the feature names:
vec.get_feature_names_out()
array(['neighborhood=Fremont', 'neighborhood=Queen Anne',
       'neighborhood=Wallingford', 'price', 'rooms'], dtype=object)
There is one clear disadvantage of this approach: if your category has many possible values, this can greatly increase the size of your dataset. However, because the encoded data contains mostly zeros, a sparse output can be a very efficient solution:
vec = DictVectorizer(sparse=True, dtype=int)
vec.fit_transform(data)
<4x5 sparse matrix of type '<class 'numpy.int64'>' with 12 stored elements in Compressed Sparse Row format>
Many (though not yet all) of the Scikit-Learn estimators accept such sparse inputs when fitting and evaluating models. sklearn.preprocessing.OneHotEncoder
and sklearn.feature_extraction.FeatureHasher
are two additional tools that Scikit-Learn includes to support this type of encoding.
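As a brief sketch of the first of these (assuming a reasonably recent Scikit-Learn, and using only the neighborhood column from the data above), OneHotEncoder produces the same kind of 0/1 indicator columns when you hand it the categorical values on their own:
from sklearn.preprocessing import OneHotEncoder

# encode just the neighborhood labels from the data defined above
neighborhoods = [[d['neighborhood']] for d in data]
enc = OneHotEncoder()                      # sparse output by default
encoded = enc.fit_transform(neighborhoods)
print(enc.categories_)                     # the labels behind each column
print(encoded.toarray())                   # dense 0/1 indicator matrix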
Text Features¶
Another common need in feature engineering is to convert text to a set of representative numerical values. For example, most automatic mining of social media data relies on some form of encoding the text as numbers. One of the simplest methods of encoding data is by word counts: you take each snippet of text, count the occurrences of each word within it, and put the results in a table.
For example, consider the following set of three phrases:
sample = ['problem of evil',
'evil queen',
'horizon problem']
For a vectorization of this data based on word count, we could construct a column representing the word "problem," the word "evil," the word "horizon," and so on. While doing this by hand would be possible, the tedium can be avoided by using Scikit-Learn's CountVectorizer
:
from sklearn.feature_extraction.text import CountVectorizer
vec = CountVectorizer()
X = vec.fit_transform(sample)
X
<3x5 sparse matrix of type '<class 'numpy.int64'>' with 7 stored elements in Compressed Sparse Row format>
The result is a sparse matrix recording the number of times each word appears; it is easier to inspect if we convert this to a DataFrame
with labeled columns:
import pandas as pd
pd.DataFrame(X.toarray(), columns=vec.get_feature_names_out())
   evil  horizon  of  problem  queen
0     1        0   1        1      0
1     1        0   0        0      1
2     0        1   0        1      0
There are some issues with this approach, however: the raw word counts lead to features that put too much weight on words that appear very frequently, and this can be sub-optimal in some classification algorithms. One approach to fix this is known as term frequency-inverse document frequency (TF–IDF), which weights the word counts by a measure of how often they appear in the documents. The syntax for computing these features is similar to the previous example:
from sklearn.feature_extraction.text import TfidfVectorizer
vec = TfidfVectorizer()
X = vec.fit_transform(sample)
pd.DataFrame(X.toarray(), columns=vec.get_feature_names_out())
       evil   horizon        of   problem     queen
0  0.517856  0.000000  0.680919  0.517856  0.000000
1  0.605349  0.000000  0.000000  0.000000  0.795961
2  0.000000  0.795961  0.000000  0.605349  0.000000
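For reference, under Scikit-Learn's default settings (smoothed IDF and L2 row normalization, which is an assumption about the version in use), the unnormalized weight of term $t$ in document $d$ is $\text{tf}(t, d)\left(1 + \ln\frac{1 + n}{1 + \text{df}(t)}\right)$, where $n$ is the number of documents and $\text{df}(t)$ is the number of documents containing $t$; each row is then rescaled to unit length. This is why the rarer words "of", "queen", and "horizon" carry more weight above than "evil" and "problem", which each appear in two documents.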
For an example of using TF-IDF in a classification problem, see In Depth: Naive Bayes Classification.
Image Features¶
Another common need is to suitably encode images for machine learning analysis. The simplest approach is what we used for the digits data in Introducing Scikit-Learn: simply using the pixel values themselves. But depending on the application, such approaches may not be optimal.
A comprehensive summary of feature extraction techniques for images is well beyond the scope of this section, but you can find excellent implementations of many of the standard approaches in the Scikit-Image project. For one example of using Scikit-Learn and Scikit-Image together, see Feature Engineering: Working with Images.
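As a tiny taste of that (a sketch that assumes Scikit-Image is installed; the image and default parameters are arbitrary), histogram-of-oriented-gradients (HOG) features can be extracted in a couple of lines:
from skimage import data, feature

# a minimal sketch: HOG features for one of Scikit-Image's built-in test images
image = data.camera()           # a 512x512 grayscale photograph
hog_vec = feature.hog(image)    # gradient-orientation histograms, flattened
print(hog_vec.shape)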
Derived Features¶
Another useful type of feature is one that is mathematically derived from some input features. We saw an example of this in Hyperparameters and Model Validation when we constructed polynomial features from our input data. We saw that we could convert a linear regression into a polynomial regression not by changing the model, but by transforming the input! This is sometimes known as basis function regression, and is explored further in In Depth: Linear Regression.
For example, this data clearly cannot be well described by a straight line:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
x = np.array([1, 2, 3, 4, 5])
y = np.array([4, 2, 1, 3, 7])
plt.scatter(x, y);
Still, we can fit a line to the data using LinearRegression
and get the optimal result:
from sklearn.linear_model import LinearRegression
X = x[:, np.newaxis]
model = LinearRegression().fit(X, y)
yfit = model.predict(X)
plt.scatter(x, y)
plt.plot(x, yfit);
It's clear that we need a more sophisticated model to describe the relationship between $x$ and $y$.
One approach to this is to transform the data, adding extra columns of features to drive more flexibility in the model. For example, we can add polynomial features to the data this way:
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(degree=3, include_bias=False)
X2 = poly.fit_transform(X)
print(X2)
[[  1.   1.   1.]
 [  2.   4.   8.]
 [  3.   9.  27.]
 [  4.  16.  64.]
 [  5.  25. 125.]]
The derived feature matrix has one column representing $x$, a second column representing $x^2$, and a third column representing $x^3$. Computing a linear regression on this expanded input gives a much closer fit to our data:
model = LinearRegression().fit(X2, y)
yfit = model.predict(X2)
plt.scatter(x, y)
plt.plot(x, yfit);
This idea of improving a model not by changing the model, but by transforming the inputs, is fundamental to many of the more powerful machine learning methods. We explore this idea further in In Depth: Linear Regression in the context of basis function regression. More generally, this is one motivational path to the powerful set of techniques known as kernel methods, which we will explore in In-Depth: Support Vector Machines.
Imputation of Missing Data¶
Another common need in feature engineering is handling of missing data. We discussed the handling of missing data in DataFrame
s in Handling Missing Data, and saw that often the NaN
value is used to mark missing values. For example, we might have a dataset that looks like this:
from numpy import nan
X = np.array([[ nan, 0, 3 ],
[ 3, 7, 9 ],
[ 3, 5, 2 ],
[ 4, nan, 6 ],
[ 8, 8, 1 ]])
y = np.array([14, 16, -1, 8, -5])
When applying a typical machine learning model to such data, we will need to first replace such missing data with some appropriate fill value. This is known as imputation of missing values, and strategies range from simple (e.g., replacing missing values with the mean of the column) to sophisticated (e.g., using matrix completion or a robust model to handle such data).
The sophisticated approaches tend to be very application-specific, and we won't dive into them here. For a baseline imputation approach, using the mean, median, or most frequent value, Scikit-Learn provides the SimpleImputer
class:
from sklearn.impute import SimpleImputer
imp = SimpleImputer(strategy='mean')
X2 = imp.fit_transform(X)
X2
array([[4.5, 0. , 3. ],
       [3. , 7. , 9. ],
       [3. , 5. , 2. ],
       [4. , 5. , 6. ],
       [8. , 8. , 1. ]])
We see that in the resulting data, the two missing values have been replaced with the mean of the remaining values in the column. This imputed data can then be fed directly into, for example, a LinearRegression
estimator:
model = LinearRegression().fit(X2, y)
model.predict(X2)
array([15.32857143, 10.68571429, 6.97142857, 2.68571429, -3.67142857])
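If you want something a little more sophisticated than a per-column mean while staying inside Scikit-Learn, one option (sketched here under the assumption that nearest-neighbor imputation suits your data) is KNNImputer, which fills each missing entry using the values observed in the most similar rows:
from sklearn.impute import KNNImputer

# a minimal sketch: fill each NaN from the 2 rows nearest in the observed features
knn_imp = KNNImputer(n_neighbors=2)
knn_imp.fit_transform(X)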
Feature Pipelines¶
With any of the preceding examples, it can quickly become tedious to do the transformations by hand, especially if you wish to string together multiple steps. For example, we might want a processing pipeline that looks something like this:
- Impute missing values using the mean
- Transform features to quadratic
- Fit a linear regression
To streamline this type of processing pipeline, Scikit-Learn provides a Pipeline
object, which can be used as follows:
from sklearn.pipeline import make_pipeline
model = make_pipeline(SimpleImputer(strategy='mean'),
                      PolynomialFeatures(degree=2),
                      LinearRegression())
This pipeline looks and acts like a standard Scikit-Learn object, and will apply all the specified steps to any input data.
model.fit(X, y) # X with missing values, from above
print(y)
print(model.predict(X))
[14 16 -1  8 -5]
[14. 16. -1.  8. -5.]
All the steps of the model are applied automatically. Notice that for the simplicity of this demonstration, we've applied the model to the data it was trained on; this is why it was able to perfectly predict the result (refer back to Hyperparameters and Model Validation for further discussion of this).
For some examples of Scikit-Learn pipelines in action, see the following section on naive Bayes classification, as well as In Depth: Linear Regression, and In-Depth: Support Vector Machines.