SmartEngine 1.6.0
SmartEngine::GradientDescentTrainingInfo Struct Reference

Training configuration for GradientDescentTrainer.

#include <GradientDescentTrainer.h>

Public Attributes

GradientDescentTrainingAlgorithm algorithm = GradientDescentTrainingAlgorithm::Adam
 The training algorithm to use.

RegularizationLossInfo regularizationLoss
 The regularization loss parameters to apply.

float clipGradients = 0.0f
 If this value is greater than 0, gradients are clipped to the range [-clipGradients, clipGradients] before being applied. A value less than or equal to 0 results in no clipping.

float learnRate = 0.001f
 Adam optimizer learn rate (step size).

float beta1 = 0.9f
 Adam optimizer beta1 value (exponential decay rate for the first-moment estimates).

float beta2 = 0.999f
 Adam optimizer beta2 value (exponential decay rate for the second-moment estimates).

float epsilon = 1e-8f
 Adam optimizer epsilon value (small constant added for numerical stability).
 

Detailed Description

Training configuration for GradientDescentTrainer.
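The struct can be filled in field by field before training. The sketch below uses only the members and defaults documented on this page; the MakeTrainingInfo helper and the specific values chosen are illustrative, and how the finished struct is handed to a GradientDescentTrainer is not covered here.

    #include <GradientDescentTrainer.h>

    using namespace SmartEngine;

    // Illustrative helper (not part of SmartEngine): builds a training-info struct
    // with an explicit Adam configuration and gradient clipping enabled.
    GradientDescentTrainingInfo MakeTrainingInfo()
    {
        GradientDescentTrainingInfo info;
        info.algorithm     = GradientDescentTrainingAlgorithm::Adam; // default algorithm
        info.learnRate     = 0.0005f; // smaller step size than the 0.001f default
        info.beta1         = 0.9f;    // first-moment decay rate (default)
        info.beta2         = 0.999f;  // second-moment decay rate (default)
        info.epsilon       = 1e-8f;   // numerical stability term (default)
        info.clipGradients = 1.0f;    // clip gradients to [-1, 1] before they are applied
        // info.regularizationLoss holds the RegularizationLossInfo parameters; left at defaults here.
        return info;
    }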

Member Data Documentation

◆ algorithm

GradientDescentTrainingAlgorithm SmartEngine::GradientDescentTrainingInfo::algorithm = GradientDescentTrainingAlgorithm::Adam

The training algorithm to use.

◆ beta1

float SmartEngine::GradientDescentTrainingInfo::beta1 = 0.9f

Adam optimizer beta1 value (exponential decay rate for the first-moment estimates).

◆ beta2

float SmartEngine::GradientDescentTrainingInfo::beta2 = 0.999f

Adam optimizer beta2 value (exponential decay rate for the second-moment estimates).

◆ clipGradients

float SmartEngine::GradientDescentTrainingInfo::clipGradients = 0.0f

If this value is greater than 0, gradients are clipped to the range [-clipGradients, clipGradients] before being applied. A value less than or equal to 0 results in no clipping.
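The rule above amounts to clamping each gradient component into a symmetric range before the update. A minimal sketch of that behavior, written independently of SmartEngine's internals:

    #include <algorithm>

    // Clamp a single gradient component as described above: no-op when
    // clipGradients <= 0, otherwise restrict it to [-clipGradients, clipGradients].
    float ClipGradient(float gradient, float clipGradients)
    {
        if (clipGradients <= 0.0f)
            return gradient; // clipping disabled
        return std::clamp(gradient, -clipGradients, clipGradients);
    }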

◆ epsilon

float SmartEngine::GradientDescentTrainingInfo::epsilon = 1e-8f

Adam optimizer epsilon value (small constant added for numerical stability).

◆ learnRate

float SmartEngine::GradientDescentTrainingInfo::learnRate = 0.001f

Adam optimizer learn rate (step size).
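For reference, learnRate, beta1, beta2, and epsilon correspond to the usual parameters of the Adam update (Kingma & Ba). The standard form is shown below; whether SmartEngine applies the bias-correction terms exactly as written is not stated on this page.

    m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t
    v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2
    \hat{m}_t = m_t / (1 - \beta_1^t), \qquad \hat{v}_t = v_t / (1 - \beta_2^t)
    \theta_t = \theta_{t-1} - \mathrm{learnRate} \cdot \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)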

◆ regularizationLoss

RegularizationLossInfo SmartEngine::GradientDescentTrainingInfo::regularizationLoss

The regularization loss parameters to apply.