Gradient Back-Propagation Algorithm in the Multilayer Perceptron: Foundations and Case Study
Volume 32, Issue 2, March 2021, Pages 271–290
Héritier Nsenge Mpia1 and Inipaivudu Baelani Nephtali2
1,2 Département d’Informatique de Gestion, Université de l’Assomption au Congo, B.P 104, Butembo, Nord-Kivu, RD Congo
Original language: French
Copyright © 2021 ISSR Journals. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
Training a multi-layer neural network is sometimes difficult, especially for novices in Artificial Intelligence. Many people believe that this training must be handed over to computers in order to carry out its ultra-powerful calculations; as a result, they never work out what is going on behind those calculations, assuming there is too much mathematics for a human to grasp what is at stake. This mythical caricature attached to neural networks is far from the truth. Training a neural network consists in finding synaptic weights such that the output layer classifies the observed values of a training set accurately, so that the resulting model shows generalisation capacity on examples it will never have encountered during the training phase.
Author Keywords: Artificial Intelligence, Loss Function, Epoch, Logistic Function, Neural Network.
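
The training procedure the abstract describes can be made concrete with a short example. Below is a minimal sketch, in Python with NumPy, of gradient back-propagation in a one-hidden-layer perceptron; the logistic activation, the mean-squared-error loss function, the three-unit hidden layer, and the XOR toy training set are illustrative assumptions, not details taken from the paper.

import numpy as np

# Minimal sketch of gradient back-propagation in a one-hidden-layer
# perceptron. Architecture, loss, and data are illustrative assumptions.

def sigmoid(z):
    # Logistic activation function.
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Toy training set: XOR (4 examples, 2 inputs, 1 target each).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Synaptic weights and biases, randomly initialised.
W1 = rng.normal(0.0, 1.0, (2, 3))   # input layer  -> hidden layer
b1 = np.zeros((1, 3))
W2 = rng.normal(0.0, 1.0, (3, 1))   # hidden layer -> output layer
b2 = np.zeros((1, 1))

lr = 0.5                            # learning rate
for epoch in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)        # hidden activations
    out = sigmoid(h @ W2 + b2)      # network output

    # Mean-squared-error loss function.
    loss = np.mean((out - y) ** 2)

    # Backward pass: propagate the error gradient layer by layer.
    d_out = (out - y) * out * (1.0 - out)   # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1.0 - h)    # gradient at the hidden layer

    # Gradient-descent update of every synaptic weight.
    W2 -= lr * (h.T @ d_out) / len(X)
    b2 -= lr * d_out.mean(axis=0, keepdims=True)
    W1 -= lr * (X.T @ d_h) / len(X)
    b1 -= lr * d_h.mean(axis=0, keepdims=True)

print(loss)              # should be close to 0 after training
print(out.round(2))      # should approach [[0], [1], [1], [0]]

Each pass over the four examples constitutes one epoch; the error gradient is propagated backwards from the output layer to adjust every synaptic weight, which is exactly the search for weights with generalisation capacity that the abstract describes.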
How to Cite this Article
Héritier Nsenge Mpia and Inipaivudu Baelani Nephtali, “Gradient Back-Propagation Algorithm in the Multilayer Perceptron: Foundations and Case Study,” International Journal of Innovation and Applied Studies, vol. 32, no. 2, pp. 271–290, March 2021.