The most basic, “vanilla” kind of neural network, in which the inputs flow from one side of the network forward through one or more hidden layers and finally to the output. In a feed-forward neural net, information moves in only one direction: from the input nodes, through the hidden nodes, and to the output nodes, with no cycles or loops in the network. It is usually used as a simple example of how deep learning neural networks work, alongside backpropagation, gradient descent, and other innovations in deep learning. A feed-forward network with multiple layers is also referred to as a multi-layer perceptron, since it extends the perceptron model, a single-layer network consisting only of an input layer and an output layer.
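As an illustration, here is a minimal sketch of a forward pass in NumPy. The layer sizes, random weight initialization, and ReLU activation are assumptions chosen for the example, not part of any particular model or library.

```python
import numpy as np

def relu(x):
    # Element-wise rectified linear activation.
    return np.maximum(0.0, x)

def feed_forward(x, layers):
    # Information flows strictly forward: each layer's output becomes the
    # next layer's input, with no cycles or loops.
    activation = x
    for i, (weights, bias) in enumerate(layers):
        z = activation @ weights + bias
        # Apply the nonlinearity on hidden layers; leave the final output linear.
        activation = relu(z) if i < len(layers) - 1 else z
    return activation

# A multi-layer perceptron with one hidden layer:
# 3 inputs -> 4 hidden units -> 2 outputs, randomly initialized.
rng = np.random.default_rng(0)
layers = [
    (rng.standard_normal((3, 4)), np.zeros(4)),  # input -> hidden
    (rng.standard_normal((4, 2)), np.zeros(2)),  # hidden -> output
]

x = np.array([1.0, 0.5, -0.5])
print(feed_forward(x, layers))
```

Removing the hidden layer from this sketch leaves exactly the single-layer perceptron case: the input is mapped directly to the output by one weight matrix and bias.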