An encoder-decoder is a pair of neural networks in which one network encodes a source of data, such as an image, into an internal representation, and the paired network then decodes that representation back into the original format. An autoencoder is an encoder-decoder pair used in unsupervised learning to learn encodings automatically from unlabeled training data. It has two main parts: an encoder that maps the input to a code, and a decoder that reconstructs the input from the code. An optimal autoencoder would reconstruct its input as faithfully as possible. Because the code is typically smaller than the input, autoencoders are trained to discard “noise”, i.e. meaningless variation in the data.

Autoencoders are used for image denoising and compression, generating new data resembling the training data, detecting anomalies and patterns in data (anomalies show up as inputs the model reconstructs poorly), facial recognition, machine translation, sentiment analysis, image captioning, and other tasks. Regularized autoencoders constrain the model so that it captures structure and discards noise, while variational autoencoders introduce randomness into the code so that new, valid outputs can be generated from the training distribution.
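
The encode-then-reconstruct idea can be sketched with a minimal single-hidden-layer autoencoder in NumPy. This is an illustrative toy, not a production implementation: the data, layer sizes, learning rate, and step count are all assumptions chosen so that an 8-dimensional input squeezed through a 2-unit bottleneck can still be reconstructed, because the toy data secretly lies on a 2-D subspace.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 8 dimensions that lie on a 2-D subspace,
# so a 2-unit bottleneck code is enough to reconstruct them well.
latent = rng.normal(size=(200, 2))
basis = rng.normal(size=(2, 8))
X = latent @ basis

d_in, d_code = 8, 2  # assumed sizes: 8-D input, 2-D code
W_enc = rng.normal(scale=0.1, size=(d_in, d_code))
b_enc = np.zeros(d_code)
W_dec = rng.normal(scale=0.1, size=(d_code, d_in))
b_dec = np.zeros(d_in)

def forward(X):
    code = np.tanh(X @ W_enc + b_enc)  # encoder: input -> code
    recon = code @ W_dec + b_dec       # decoder: code -> reconstruction
    return code, recon

_, recon = forward(X)
mse_before = np.mean((recon - X) ** 2)

# Train with plain gradient descent on the mean-squared reconstruction error.
lr = 0.1
for step in range(2000):
    code, recon = forward(X)
    err = recon - X                              # reconstruction error
    grad_W_dec = code.T @ err / len(X)
    grad_b_dec = err.mean(axis=0)
    d_pre = (err @ W_dec.T) * (1 - code ** 2)    # backprop through tanh
    grad_W_enc = X.T @ d_pre / len(X)
    grad_b_enc = d_pre.mean(axis=0)
    W_dec -= lr * grad_W_dec
    b_dec -= lr * grad_b_dec
    W_enc -= lr * grad_W_enc
    b_enc -= lr * grad_b_enc

code, recon = forward(X)
mse_after = np.mean((recon - X) ** 2)
```

After training, `mse_after` should be far below `mse_before`: the encoder has learned a 2-number summary of each 8-number input from which the decoder can rebuild it. Swapping the toy data for images and the linear layers for convolutional ones gives the denoising and compression applications described above.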