Feature engineering


Feature engineering is the process of using domain knowledge to extract features from raw data via data mining techniques. These features can be used to improve the performance of machine learning algorithms. Feature engineering can be considered applied machine learning itself.

Features

A feature is an attribute or property shared by all of the independent units on which analysis or prediction is to be done. Any attribute can be a feature, as long as it is useful to the model.
The purpose of a feature is easiest to understand in the context of a specific problem: a feature is a characteristic of the data that might help when solving that problem.

Importance

The features used in a predictive model strongly influence its results.
Feature engineering is widely asserted to play an important part in the success or failure of Kaggle competitions and machine learning projects.

Process

The feature engineering process is iterative: brainstorm candidate features, decide which ones to create, create them, check how they work with the model, and improve or replace them as needed.
A feature can be strongly relevant, relevant, weakly relevant, or irrelevant. Even if some features are irrelevant, having too many candidates is better than missing those that are important; feature selection can then be used to discard unhelpful features and prevent overfitting, as in the sketch below.
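A minimal sketch of such feature selection, assuming scikit-learn and synthetic stand-in data (the dataset and the choice of k = 10 are illustrative assumptions, not prescriptive): a univariate filter scores each candidate feature against the target and keeps only the strongest ones.

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

# Synthetic stand-in for a real dataset: 50 candidate features, only 5 informative.
X, y = make_classification(n_samples=500, n_features=50, n_informative=5,
                           random_state=0)

# Score each feature against the target and keep the 10 highest-scoring ones.
selector = SelectKBest(score_func=mutual_info_classif, k=10)
X_selected = selector.fit_transform(X, y)
print("kept feature indices:", selector.get_support(indices=True))

In practice the selector is usually wrapped in a cross-validated pipeline, so that the retained features are judged by held-out performance rather than training-set fit.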

Feature explosion

Feature explosion can be caused by feature combination (crossing existing features to form new ones) or feature templates (instantiating templates that generate many features at once), both of which lead to rapid growth in the total number of features.
Feature explosion can be mitigated with techniques such as regularization, kernel methods, and feature selection; the sketch below illustrates the regularization route.
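A minimal sketch assuming scikit-learn (the synthetic data and the regularization strength alpha are illustrative assumptions): degree-2 feature combination inflates 20 raw features into 230, and an L1 penalty then drives most of the resulting coefficients to exactly zero.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

# 20 raw features, of which only 5 carry signal.
X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=10.0, random_state=0)

# Feature combination: all pairwise products and squares, 20 -> 230 features.
X_poly = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)
print("features after combination:", X_poly.shape[1])

# L1 regularization zeroes out most of the exploded feature set.
lasso = Lasso(alpha=1.0, max_iter=10_000).fit(X_poly, y)
print("features with nonzero weight:", int(np.sum(lasso.coef_ != 0)))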

Automation

Automation of feature engineering is a research topic that dates back to at least the late 1990s. The academic literature on the topic can be roughly divided into two strands: first, multi-relational decision tree learning (MRDTL), which uses a supervised algorithm similar to a decision tree; second, more recent approaches, such as Deep Feature Synthesis, which use simpler methods.
Multi-relational decision tree learning generates features in the form of SQL queries by successively adding new clauses to the queries. For instance, the algorithm might start out with

SELECT COUNT(*) FROM ATOM t1 LEFT JOIN MOLECULE t2 ON t1.mol_id = t2.mol_id GROUP BY t1.mol_id

The query can then successively be refined by adding conditions, such as "WHERE t1.charge <= -0.392".
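As a sketch of one such refinement step (reusing the table and column names from the query above), folding that condition into the query yields:

SELECT COUNT(*) FROM ATOM t1 LEFT JOIN MOLECULE t2 ON t1.mol_id = t2.mol_id WHERE t1.charge <= -0.392 GROUP BY t1.mol_id

Each refined query of this kind defines a new candidate feature, and the tree-learning algorithm keeps the refinements that best discriminate the training examples.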
However, most of the academic studies on MRDTL use implementations based on existing relational databases, which results in many redundant operations. These redundancies can be reduced by using tricks such as tuple id propagation. More recently, it has been demonstrated that the efficiency can be increased further by using incremental updates, which completely eliminates redundancies.
In 2015, researchers at MIT presented the Deep Feature Synthesis algorithm and demonstrated its effectiveness in online data science competitions, where it beat 615 of 906 human teams. Deep Feature Synthesis is available as an open-source library called Featuretools. That work was followed by other automated feature engineering approaches, including IBM's OneBM and Berkeley's ExploreKit. The researchers at IBM stated that feature engineering automation "helps data scientists reduce data exploration time allowing them to try and error many ideas in short time. On the other hand, it enables non-experts, who are not familiar with data science, to quickly extract value from their data with a little effort, time, and cost."
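A minimal sketch of Deep Feature Synthesis via Featuretools, assuming a Featuretools 1.x release (the keyword target_dataframe_name was called target_entity in earlier versions) and using the library's bundled mock-customer demo data as a stand-in for a real relational dataset:

import featuretools as ft

# Demo EntitySet with related tables: customers -> sessions -> transactions.
es = ft.demo.load_mock_customer(return_entityset=True)

# Deep Feature Synthesis stacks aggregation and transform primitives across
# table relationships to generate candidate features automatically.
feature_matrix, feature_defs = ft.dfs(
    entityset=es,
    target_dataframe_name="customers",  # table to build features for
    max_depth=2,                        # how many relationships to traverse
)
print(feature_matrix.shape)
print(feature_defs[:5])

The generated features are stacked aggregates, such as counts and means computed by traversing the customer-session-transaction relationships.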