Module preprocessing (0.2.0)

Transformers that prepare data for other estimators. This module is styled after Scikit-Learn's preprocessing module: https://scikit-learn.org/stable/modules/preprocessing.html.

Classes

OneHotEncoder

OneHotEncoder(
    drop: typing.Optional[typing.Literal["most_frequent"]] = None,
    min_frequency: typing.Optional[int] = None,
    max_categories: typing.Optional[int] = None,
)

Encode categorical features in a one-hot format.

The input to this transformer should be an array-like of integers or strings, denoting the values taken on by categorical (discrete) features. The features are encoded using a one-hot (aka 'one-of-K' or 'dummy') encoding scheme.

Note that this method deviates from Scikit-Learn; instead of producing sparse binary columns, the encoding is a single column of STRUCT<index INT64, value DOUBLE>.
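To illustrate the shape of this output, here is a minimal pure-Python sketch (not the library's actual implementation, which runs in BigQuery) of mapping a categorical column to one struct-like record per row, analogous to STRUCT&lt;index INT64, value DOUBLE&gt;:

```python
def one_hot_encode(column):
    """Sketch of the encoding scheme: each category gets an integer
    index, and each row becomes a single (index, value) record rather
    than a sparse row of binary columns."""
    # Assign indices to categories in sorted order. Index 0 is reserved
    # for infrequent categories in the real encoder, so frequent
    # categories start at index 1 here to mirror that convention.
    categories = sorted(set(column))
    index_of = {cat: i + 1 for i, cat in enumerate(categories)}
    # Each row maps to one struct-like dict: {"index": ..., "value": 1.0}
    return [{"index": index_of[v], "value": 1.0} for v in column]

encoded = one_hot_encode(["red", "green", "red", "blue"])
# blue -> 1, green -> 2, red -> 3
```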

Parameters
drop Optional[Literal["most_frequent"]], default None

Specifies a methodology to use to drop one of the categories per feature. This is useful in situations where perfectly collinear features cause problems, such as when feeding the resulting data into an unregularized linear regression model. However, dropping one category breaks the symmetry of the original representation and can therefore induce a bias in downstream models, for instance in penalized linear classification or regression models.

None (default): retain all the categories.

"most_frequent": drop the most frequent category found in the string expression. Selecting this value causes the function to use dummy encoding.

min_frequency Optional[int], default None

Specifies the minimum frequency below which a category will be considered infrequent. Default None. int: categories with a smaller frequency will be considered infrequent and encoded together at index 0.

max_categories Optional[int], default None

Specifies an upper limit to the number of output features for each input feature when considering infrequent categories. If there are infrequent categories, max_categories includes the category representing the infrequent categories along with the frequent categories. Default None: the limit is set to 1,000,000.
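The interaction of min_frequency and max_categories can be sketched as follows. This is an illustrative pure-Python approximation of the index-assignment logic described above, not the library's implementation:

```python
from collections import Counter

def assign_indices(column, min_frequency=None, max_categories=None):
    """Assign category indices: frequent categories get indices >= 1,
    infrequent categories collapse to the shared index 0."""
    counts = Counter(column)
    # Categories below min_frequency are considered infrequent.
    frequent = [c for c, n in counts.items()
                if min_frequency is None or n >= min_frequency]
    # max_categories counts the infrequent bucket as one category, so
    # at most (max_categories - 1) frequent categories are retained.
    frequent.sort(key=lambda c: (-counts[c], c))
    if max_categories is not None:
        frequent = frequent[: max_categories - 1]
    index_of = {c: i + 1 for i, c in enumerate(sorted(frequent))}
    # Everything else maps to index 0 (the infrequent bucket).
    return [index_of.get(v, 0) for v in column]

indices = assign_indices(["a", "a", "a", "b", "b", "c"], min_frequency=2)
# "c" appears only once, so it falls into the infrequent bucket.
```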

StandardScaler

StandardScaler()

Standardize features by removing the mean and scaling to unit variance.

The standard score of a sample x is calculated as z = (x - u) / s, where u is the mean of the training samples and s is the standard deviation of the training samples.

Centering and scaling happen independently on each feature by computing the relevant statistics on the samples in the training set. Mean and standard deviation are then stored to be used on later data using transform.
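The fit/transform pattern described above can be sketched in plain Python for a single feature. This is a conceptual illustration (the real transformer computes its statistics in BigQuery), using the population standard deviation:

```python
class SimpleStandardScaler:
    """Sketch of the fit/transform pattern: fit computes and stores the
    mean and standard deviation; transform applies z = (x - u) / s
    using the stored statistics, so later data is scaled consistently."""

    def fit(self, values):
        n = len(values)
        self.mean_ = sum(values) / n
        variance = sum((x - self.mean_) ** 2 for x in values) / n
        self.scale_ = variance ** 0.5
        return self

    def transform(self, values):
        return [(x - self.mean_) / self.scale_ for x in values]

scaler = SimpleStandardScaler().fit([2.0, 4.0, 6.0])
scaled = scaler.transform([2.0, 4.0, 6.0])
```

Because the statistics are stored on fit, the same scaler can later transform new data on the training set's scale, which is what keeps training and serving consistent.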

Standardization of a dataset is a common requirement for many machine learning estimators: they might behave badly if the individual features do not roughly resemble standard normally distributed data (e.g. Gaussian with zero mean and unit variance).