Using Dask EntitySets (BETA)#

Note

Support for Dask EntitySets is still in Beta. While the key functionality has been implemented, development is ongoing to add the remaining functionality.

All planned improvements to the Featuretools/Dask integration are documented on Github. If you see an open issue that is important for your application, please let us know by upvoting or commenting on the issue. If you encounter any errors using Dask dataframes in EntitySets, or find missing functionality that does not yet have an open issue, please create a new issue on Github.

Creating a feature matrix from a very large dataset can be problematic if the underlying pandas dataframes that make up the EntitySet cannot easily fit in memory. To help get around this issue, Featuretools supports creating EntitySet objects from Dask dataframes. A Dask EntitySet can then be passed to featuretools.dfs or featuretools.calculate_feature_matrix to create a feature matrix, which will be returned as a Dask dataframe. In addition to working on larger-than-memory datasets, this approach also allows users to take advantage of the parallel and distributed processing capabilities offered by Dask.

This guide will provide an overview of how to create a Dask EntitySet and then generate a feature matrix from it. If you are already familiar with creating a feature matrix starting from pandas DataFrames, this process will seem quite familiar, as the workflow is the same. There are, however, some limitations when using Dask dataframes, and those limitations are reviewed in more detail below.

Creating EntitySets#

For this example, we will create a very small pandas DataFrame and then convert this into a Dask DataFrame to use in the remainder of the process. Normally when using Dask, you would just read your data directly into a Dask DataFrame without the intermediate step of using pandas.

[1]:
import dask.dataframe as dd
import pandas as pd

import featuretools as ft

id = [0, 1, 2, 3, 4]
values = [12, -35, 14, 103, -51]
df = pd.DataFrame({"id": id, "values": values})
dask_df = dd.from_pandas(df, npartitions=2)

dask_df
[1]:
Dask DataFrame Structure:
                  id values
npartitions=2
0              int64  int64
3                ...    ...
4                ...    ...
Dask Name: from_pandas, 1 graph layer

Now that we have our Dask DataFrame, we can start to create the EntitySet. Inferring Woodwork logical types for the columns in a Dask dataframe can be computationally expensive. To avoid this expense, logical type inference can be skipped by supplying a dictionary of logical types using the logical_types parameter when calling es.add_dataframe(). Logical types can be specified as Woodwork LogicalType classes, or their equivalent string representation. For more information refer to the Woodwork Typing in Featuretools guide.

Aside from supplying the logical types, the rest of the process of creating an EntitySet is the same as if we were using pandas DataFrames.

[2]:
from woodwork.logical_types import Double, Integer

es = ft.EntitySet(id="dask_es")
es = es.add_dataframe(
    dataframe_name="dask_input_df",
    dataframe=dask_df,
    index="id",
    logical_types={"id": Integer, "values": Double},
)

es
[2]:
Entityset: dask_es
  DataFrames:
    dask_input_df [Rows: Delayed('int-30e8999b-2f50-43af-a057-a0f368fca99e'), Columns: 2]
  Relationships:
    No relationships

Notice that when we print our EntitySet, the number of rows for the DataFrame named dask_input_df is returned as a Dask Delayed object. This is because obtaining the length of a Dask DataFrame may require an expensive compute operation to sum up the lengths of all the individual partitions that make up the DataFrame, and that operation is not performed by default.
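If you do need the actual row count, you can trigger the computation explicitly. A minimal sketch (the cost scales with the size of the data):

    # Explicitly compute the number of rows; this sums the lengths of all
    # partitions and can be expensive on a large DataFrame.
    n_rows = es["dask_input_df"].shape[0].compute()
    n_rows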

Running DFS#

We can pass the EntitySet we created above to featuretools.dfs in order to create a feature matrix. If the EntitySet we pass to dfs is made of Dask DataFrames, the feature matrix we get back will be a Dask DataFrame.

[3]:
feature_matrix, features = ft.dfs(
    entityset=es,
    target_dataframe_name="dask_input_df",
    trans_primitives=["negate"],
    max_depth=1,
)
feature_matrix
[3]:
Dask DataFrame Structure:
                values -(values)     id
npartitions=2
0              float64   float64  Int64
3                  ...       ...    ...
4                  ...       ...    ...
Dask Name: assign, 7 graph layers

This feature matrix can be saved to disk or computed and brought into memory, using the appropriate Dask DataFrame methods.

[4]:
fm_computed = feature_matrix.compute()
fm_computed
[4]:
   values  -(values)  id
0    12.0      -12.0   0
1   -35.0       35.0   1
2    14.0      -14.0   2
3   103.0     -103.0   3
4   -51.0       51.0   4
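Alternatively, the feature matrix could be written to disk as partitioned parquet files without bringing it into memory first. A minimal sketch (the output path is hypothetical, and writing parquet requires a parquet engine such as pyarrow):

    # Write the Dask feature matrix to partitioned parquet files on disk
    # without materializing the full matrix in memory.
    feature_matrix.to_parquet("feature_matrix_parquet/")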

While this is a simple example to illustrate the process of using Dask DataFrames with Featuretools, this process will also work with an EntitySet containing multiple dataframes, as well as with aggregation primitives.

Limitations#

The key functionality of Featuretools is available for use with a Dask EntitySet, and work is ongoing to add the remaining functionality that is available when using a pandas EntitySet. There are, however, some limitations to be aware of when creating a Dask EntitySet and then using it to generate a feature matrix. The most significant limitations are reviewed in more detail in this section.

Note

If the limitations of using a Dask EntitySet are problematic for your problem, you may still be able to compute a larger-than-memory feature matrix by partitioning your data as described in Improving Computational Performance.

Supported Primitives#

When creating a feature matrix from a Dask EntitySet, only certain primitives can be used. Primitives that rely on the order of the entire DataFrame or require an entire column for computation are currently not supported when using a Dask EntitySet. Multivariable and time-dependent aggregation primitives also are not currently supported.

To obtain a list of the primitives that can be used with a Dask EntitySet, you can call featuretools.list_primitives(). This will return a table of all primitives. Any primitive that can be used with a Dask EntitySet will have a value of True in the dask_compatible column.

[5]:
primitives_df = ft.list_primitives()
dask_compatible_df = primitives_df[primitives_df["dask_compatible"] == True]
dask_compatible_df.head()
[5]:
name type dask_compatible spark_compatible description valid_inputs return_type
14 sum aggregation True True Calculates the total addition, ignoring `NaN`. <ColumnSchema (Semantic Tags = ['numeric'])> <ColumnSchema (Semantic Tags = ['numeric'])>
18 num_true aggregation True False Counts the number of `True` values. <ColumnSchema (Logical Type = Boolean)>, <Colu... <ColumnSchema (Logical Type = IntegerNullable)...
26 min aggregation True True Calculates the smallest value, ignoring `NaN` ... <ColumnSchema (Semantic Tags = ['numeric'])> <ColumnSchema (Semantic Tags = ['numeric'])>
27 max aggregation True True Calculates the highest value, ignoring `NaN` v... <ColumnSchema (Semantic Tags = ['numeric'])> <ColumnSchema (Semantic Tags = ['numeric'])>
43 mean aggregation True True Computes the average for a list of values. <ColumnSchema (Semantic Tags = ['numeric'])> <ColumnSchema (Semantic Tags = ['numeric'])>
[6]:
dask_compatible_df.tail()
[6]:
name type dask_compatible spark_compatible description valid_inputs return_type
193 greater_than transform True False Determines if values in one list are greater t... <ColumnSchema (Logical Type = Datetime)>, <Col... <ColumnSchema (Logical Type = BooleanNullable)>
194 modulo_numeric_scalar transform True True Computes the modulo of each element in the lis... <ColumnSchema (Semantic Tags = ['numeric'])> <ColumnSchema (Semantic Tags = ['numeric'])>
196 time_since transform True False Calculates time from a value to a specified cu... <ColumnSchema (Logical Type = Datetime)> <ColumnSchema (Semantic Tags = ['numeric'])>
197 multiply_numeric_boolean transform True False Performs element-wise multiplication of a nume... <ColumnSchema (Logical Type = Boolean)>, <Colu... <ColumnSchema (Semantic Tags = ['numeric'])>
200 hour transform True True Determines the hour value of a datetime. <ColumnSchema (Logical Type = Datetime)> <ColumnSchema (Logical Type = Ordinal: [0, 1, ...

DataFrame Limitations#

Featuretools stores the DataFrames that make up an EntitySet as Woodwork DataFrames which include additional typing information about the columns that are in the DataFrame. When adding a DataFrame to an EntitySet, Woodwork will attempt to infer the logical types for any columns that do not have a logical type defined. This inference process can be quite expensive for Dask DataFrames. In order to skip type inference and speed up the process of adding a Dask DataFrame to an EntitySet, users can specify the logical type to use for each column in the DataFrame. A list of available logical types can be obtained by running featuretools.list_logical_types(). To learn more about the limitations of a Dask dataframe with Woodwork typing, see the Woodwork guide on Dask dataframes.
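For example, the available logical types can be inspected directly:

    # Display the logical types that can be supplied via the logical_types parameter
    ft.list_logical_types().head()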

By default, Woodwork checks that pandas DataFrames have unique index values. Because performing this same check with Dask would require an expensive compute operation, this check is not performed when adding a Dask DataFrame to an EntitySet. When using Dask DataFrames, users must ensure that the supplied index values are unique.
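If you are unsure whether your index values are unique, you can verify this yourself before adding the DataFrame. A minimal sketch (note that both calls trigger computation, so the check can itself be expensive):

    # Compare the number of distinct index values to the total number of rows
    n_unique = dask_df["id"].nunique().compute()
    n_total = dask_df["id"].count().compute()
    assert n_unique == n_total, "index values are not unique"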

When using pandas DataFrames, the ordering of the underlying DataFrame rows is maintained by Featuretools. For a Dask DataFrame, the ordering of the DataFrame rows is not guaranteed, and Featuretools does not attempt to maintain row order. If ordering is important, close attention must be paid to any output to avoid issues.
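If a deterministic order is needed downstream, one option is to restore it after computing. A minimal sketch:

    # Bring the feature matrix into memory and sort on the index to get a
    # reproducible row order.
    fm_ordered = feature_matrix.compute().sort_index()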

EntitySet Limitations#

When creating a Featuretools EntitySet that will be made of Dask DataFrames, all of the DataFrames used to create the EntitySet must be of the same type, either all Dask DataFrames or all pandas DataFrames. Featuretools does not support creating an EntitySet containing a mix of Dask and pandas DataFrames.

Additionally, EntitySet.add_interesting_values() cannot be used in Dask EntitySets to find interesting values; however, it can be used to set a column’s interesting values with the values parameter.

[7]:
values_dict = {"values": [12, 103]}
es.add_interesting_values(dataframe_name="dask_input_df", values=values_dict)

es["dask_input_df"].ww.columns["values"].metadata
[7]:
{'dataframe_name': 'dask_input_df',
 'entityset_id': 'dask_es',
 'interesting_values': [12, 103]}

DFS Limitations#

There are a few key limitations when generating a feature matrix from a Dask EntitySet.

If a cutoff_time parameter is passed to featuretools.dfs(), it should be a single cutoff time value or a pandas DataFrame. The current implementation will still work if a Dask DataFrame is supplied for cutoff times, but a .compute() call will be made on the DataFrame to convert it into a pandas DataFrame. This conversion will result in a warning, and the process could take a considerable amount of time to complete depending on the size of the supplied DataFrame.
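If your cutoff times start out as a Dask DataFrame, you can avoid the implicit conversion (and its warning) by computing them yourself before calling dfs. A minimal sketch, where cutoff_times_dask is a hypothetical Dask DataFrame of instance ids and cutoff times:

    # Convert the cutoff times to pandas up front, then pass the result to
    # ft.dfs via the cutoff_time parameter.
    cutoff_times = cutoff_times_dask.compute()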

Additionally, Featuretools does not currently support the use of the approximate or training_window parameters when working with Dask EntitySets, but support is planned for future releases.

Finally, if the output feature matrix contains a boolean column that includes NaN values, that column may have a different datatype than it would in the same feature matrix generated from a pandas EntitySet. If feature matrix column datatypes are critical, inspect the feature matrix to make sure the columns have the expected types, and recast as necessary.
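A minimal sketch of such a check (SOME_BOOLEAN_FEATURE is a hypothetical column name):

    # Bring the feature matrix into memory, inspect the dtypes, and recast a
    # hypothetical boolean feature column if needed.
    fm = feature_matrix.compute()
    print(fm.dtypes)
    fm["SOME_BOOLEAN_FEATURE"] = fm["SOME_BOOLEAN_FEATURE"].astype("boolean")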

Other Limitations#

In some instances, generating a feature matrix with a large number of features has resulted in memory issues on Dask workers. The underlying reason for this is that the partition size of the feature matrix grows too large for Dask to handle as the number of feature columns grows large. This issue is most prevalent when the feature matrix contains a large number of columns compared to the DataFrames in the EntitySet. Possible solutions to this problem include reducing the partition size used when creating the DataFrames or increasing the memory available on Dask workers.
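One way to reduce the partition size is to repartition the Dask DataFrames before adding them to the EntitySet. A minimal sketch (the 100MB target is an arbitrary example value):

    # Repartition into smaller partitions so that the wider feature matrix
    # partitions stay manageable on the workers.
    dask_df = dask_df.repartition(partition_size="100MB")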

Currently featuretools.encode_features() does not work with a Dask DataFrame as input. This will hopefully be resolved in a future release of Featuretools.

The utility function featuretools.make_temporal_cutoffs() will not work properly with Dask inputs for instance_ids or cutoffs. However, as noted above, if a cutoff_time DataFrame is supplied to dfs, the supplied DataFrame should be a pandas DataFrame, and this can be generated by supplying pandas inputs to make_temporal_cutoffs().
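A minimal sketch of building such a cutoff time DataFrame from pandas inputs (the instance ids and dates here are made up for illustration):

    # Build a pandas cutoff time DataFrame that can be passed to dfs as cutoff_time
    instance_ids = pd.Series([0, 1, 2])
    cutoffs = pd.Series(pd.to_datetime(["2023-01-03", "2023-01-04", "2023-01-05"]))
    cutoff_times = ft.make_temporal_cutoffs(
        instance_ids, cutoffs, window_size=pd.Timedelta("1d"), num_windows=2
    )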

featuretools.remove_low_information_features() cannot currently be used with a Dask feature matrix.

When manually defining a Feature, the use_previous parameter cannot be used if the feature will be used to calculate a feature matrix from a Dask EntitySet.

Dask string[pyarrow]#

Featuretools may have issues with the new string[pyarrow] storage model used by Dask. To work around this, set dask.config.set({'dataframe.convert-string': False}) prior to running any Dask operations.
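A minimal sketch of applying this setting:

    import dask

    # Disable Dask's pyarrow-backed string conversion before creating any Dask
    # DataFrames or running other Dask operations.
    dask.config.set({"dataframe.convert-string": False})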