SmartEngine
1.6.0
Regularization tries to keep the weight values from exploding during gradient descent by adding the L1 and L2 losses of the weights to the training loss.
#include <GradientDescentTrainer.h>
Public Attributes

float L1Weights = 0.0f
    L1 loss multiplier applied to weights.
float L1Biases = 0.0f
    L1 loss multiplier applied to biases.
float L2Weights = 1e-6f
    L2 loss multiplier applied to weights.
float L2Biases = 1e-6f
    L2 loss multiplier applied to biases.
Regularization tries to keep the weight values from exploding during gradient descent by adding the L1 and L2 losses of the weights to the training loss.

L1 loss is defined as Sum(Abs(Weight_i)).
L2 loss is defined as Sum(Weight_i ^ 2) / 2.
The total regularization loss is L1 + L2.
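To make the two formulas concrete, here is a minimal standalone sketch that computes the combined regularization loss for one weight vector. The function name and signature are illustrative assumptions, not part of the SmartEngine API; only the formulas and the multiplier semantics come from this page.

```cpp
#include <cmath>
#include <vector>

// Illustrative sketch (not the SmartEngine API): combines the L1 and L2
// regularization terms defined above, scaled by their multipliers, for a
// single weight vector. The same computation would apply to biases with
// the bias multipliers.
float RegularizationLoss(const std::vector<float>& weights,
                         float l1Multiplier, float l2Multiplier)
{
    float l1 = 0.0f; // Sum(Abs(Weight_i))
    float l2 = 0.0f; // Sum(Weight_i ^ 2), halved below
    for (float w : weights)
    {
        l1 += std::fabs(w);
        l2 += w * w;
    }
    l2 *= 0.5f;
    return l1Multiplier * l1 + l2Multiplier * l2;
}
```

With the default multipliers on this struct (L1Weights = 0, L2Weights = 1e-6f), only a very small L2 penalty is applied, which gently discourages large weights without materially changing the gradient.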
float SmartEngine::RegularizationLossInfo::L1Biases = 0.0f
L1 loss multiplier applied to biases.
float SmartEngine::RegularizationLossInfo::L1Weights = 0.0f
L1 loss multiplier applied to weights.
float SmartEngine::RegularizationLossInfo::L2Biases = 1e-6f
L2 loss multiplier applied to biases.
float SmartEngine::RegularizationLossInfo::L2Weights = 1e-6f
L2 loss multiplier applied to weights.