SmartEngine 1.6.0

SmartEngine::GradientDescentTrainingInfo

GradientDescentTrainer training info.

#include <GradientDescentTrainer.h>
Public Attributes

GradientDescentTrainingAlgorithm algorithm = GradientDescentTrainingAlgorithm::Adam
    The training algorithm to use.
RegularizationLossInfo regularizationLoss
    The regularization loss parameters to apply.
float clipGradients = 0.0f
    If greater than 0, gradients are clipped to the range [-clipGradients, clipGradients] before being applied; a value less than or equal to 0 disables clipping.
float learnRate = 0.001f
    Adam optimizer learning rate.
float beta1 = 0.9f
    Adam optimizer beta1 value.
float beta2 = 0.999f
    Adam optimizer beta2 value.
float epsilon = 1e-8f
    Adam optimizer epsilon value.
Detailed Description

GradientDescentTrainer training info.

Member Data Documentation
GradientDescentTrainingAlgorithm SmartEngine::GradientDescentTrainingInfo::algorithm = GradientDescentTrainingAlgorithm::Adam

    The training algorithm to use.
float SmartEngine::GradientDescentTrainingInfo::beta1 = 0.9f

    Adam optimizer beta1 value: the exponential decay rate of the first-moment (mean) gradient estimate.
float SmartEngine::GradientDescentTrainingInfo::beta2 = 0.999f

    Adam optimizer beta2 value: the exponential decay rate of the second-moment (uncentered variance) gradient estimate.
float SmartEngine::GradientDescentTrainingInfo::clipGradients = 0.0f

    If this value is greater than 0, gradients are clipped to the range [-clipGradients, clipGradients] before being applied. A value less than or equal to 0 disables clipping.
float SmartEngine::GradientDescentTrainingInfo::epsilon = 1e-8f

    Adam optimizer epsilon value: a small constant added to the denominator of the update for numerical stability.
float SmartEngine::GradientDescentTrainingInfo::learnRate = 0.001f

    Adam optimizer learning rate (step size).
RegularizationLossInfo SmartEngine::GradientDescentTrainingInfo::regularizationLoss

    The regularization loss parameters to apply.