Feature engineering comprises the methods and processes used in machine learning to identify, enhance, and optimize training data so that a model can learn from specific aspects of an input data set. More specifically, it transforms data to create new variables that are not present in the raw training set, and it is used to produce new features for both supervised and unsupervised learning.

Common feature engineering methods include feature creation, which derives new aggregate or augmented data that does not exist in the data set (such as counting the words in a text); feature transformation, which converts data from one representation to another; and feature extraction, which selects and extracts the data from a data set that will provide the most value.

In general, feature engineering involves selecting the necessary data, transforming the required training data, and adding or augmenting the data with additional features. Done well, it makes machine learning algorithms more effective, simplifies and speeds up the model and the training process, and improves model accuracy.

Before the advent of advanced forms of deep learning, machine learning engineers had to manually select, manipulate, and otherwise transform raw training data to enhance the specific variables or features a model was meant to learn from. Feature engineering and feature extraction were therefore time-consuming parts of the machine learning workflow. Deep learning approaches allow algorithms to automatically identify and extract the features that are important for the learning process.
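The methods above can be sketched concretely. The following is a minimal illustration using pandas and NumPy on a small hypothetical data set (the column names and values are invented for the example): feature creation derives a word count that is not in the raw data, and feature transformation log-scales a skewed numeric column.

```python
import pandas as pd
import numpy as np

# Hypothetical raw training data: free-text reviews with a price column.
df = pd.DataFrame({
    "review": ["great product", "too expensive but works well", "bad"],
    "price": [10.0, 250.0, 5.0],
})

# Feature creation: a new variable (word count) not present in the raw set.
df["word_count"] = df["review"].str.split().str.len()

# Feature transformation: log-scale the skewed price column so that
# large values do not dominate distance- or gradient-based learners.
df["log_price"] = np.log1p(df["price"])

print(df[["word_count", "log_price"]])
```

A model would then be trained on the engineered columns (`word_count`, `log_price`) instead of, or alongside, the raw ones.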