Glossary of Artificial Intelligence (AI), Machine Learning (ML), and Big Data Terms


Shorthand for “the backward propagation of errors,” backpropagation is a method for efficiently computing the gradients of the loss with respect to the weights of an artificial neural network (ANN), which accelerates training. The challenge is that as a network gains more neurons and layers, the weights and biases of every neuron and connection must be adjusted, each contributing differently to the overall error; doing this in a reasonable amount of time is hard for big neural networks with many layers. In forward propagation, inputs and weights flow from one layer to the next to generate the output. The error of that output is then propagated backwards to the previous layers to update their weights in a way that reaches convergence faster. Backpropagation adjusts, or “fine-tunes,” the weights of the neural net based on the error (cost) realized in the previous training iteration (epoch). Reaching proper parameter settings through this quick fine-tuning is what makes the model reliable: it reduces error and achieves the desired level of generalization. Backpropagation was one of the innovations popularized by Geoff Hinton that made deep learning networks a practical reality.
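The cycle described above (forward pass, error measured at the output, error propagated backwards, weights nudged by gradient descent) can be sketched in a few lines of plain Python. This is a minimal illustration, not a production implementation: the network size (2 inputs, 2 hidden neurons, 1 output), the sigmoid activation, the OR-gate training data, and the learning rate are all assumptions chosen to keep the example small.

```python
# Minimal backpropagation sketch for a 2-2-1 network with sigmoid activations.
# All hyperparameters and data here are illustrative assumptions.
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy training data: the OR gate (inputs -> target).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

# Randomly initialized weights: input(2) -> hidden(2) -> output(1).
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = 0.0
lr = 0.5  # learning rate

def forward(x):
    """Forward propagation: inputs and weights flow layer to layer."""
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j])
         for j in range(2)]
    o = sigmoid(sum(w2[j] * h[j] for j in range(2)) + b2)
    return h, o

def train_epoch():
    """One epoch: forward pass, then backpropagate the error."""
    global b2
    total_loss = 0.0
    for x, t in data:
        h, o = forward(x)
        total_loss += 0.5 * (o - t) ** 2
        # Backward pass: output error, scaled by the sigmoid derivative...
        delta_o = (o - t) * o * (1 - o)
        # ...is propagated back to the hidden layer through w2.
        delta_h = [delta_o * w2[j] * h[j] * (1 - h[j]) for j in range(2)]
        # Gradient-descent updates ("fine-tuning" the weights).
        for j in range(2):
            w2[j] -= lr * delta_o * h[j]
            for i in range(2):
                w1[j][i] -= lr * delta_h[j] * x[i]
            b1[j] -= lr * delta_h[j]
        b2 -= lr * delta_o
    return total_loss

losses = [train_epoch() for _ in range(2000)]
```

After repeated epochs the total loss shrinks, which is the convergence behavior the entry describes: each backward pass fine-tunes the weights using the error from the previous iteration.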
