SmartEngine 1.6.0
SmartEngine.NeuralNetworks Namespace Reference

Classes

class  A2CTrainer
 The A2C Trainer is a reinforcement learning trainer that is composed of two parts: an actor sub-graph and a critic sub-graph. More...
 
class  ActivationNode
 Applies an activation function to the input. More...
 
class  AddNode
 Component-wise addition of two inputs. More...
 
class  Agent
 Agents are used to track the performance of one instance of a network. More...
 
class  AgentDataStore
 The agent data store keeps experience data for the purpose of training. Some RL trainers don't retain long-term data in the store, while others keep a long history of data. More...
 
class  BufferInput
 Input node into the AI graph. Allows for specifying a tensor input. More...
 
class  ChoiceNode
 Returns a random choice (as an integer) from the input node's probability distribution on each call to RetrieveOutput(). More...
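
 As a rough illustration of the Random mode, here is a minimal standalone C# sketch of categorical sampling (illustrative only, not SmartEngine code; ChoiceNode performs this internally):

    using System;

    static class ChoiceSketch
    {
        // Sample an index from a probability distribution: walk the
        // cumulative sum until it passes a uniform random threshold.
        static int SampleIndex(float[] probs, Random rng)
        {
            double threshold = rng.NextDouble();
            double cumulative = 0.0;
            for (int i = 0; i < probs.Length; i++)
            {
                cumulative += probs[i];
                if (threshold < cumulative)
                    return i;
            }
            return probs.Length - 1; // guard against rounding error
        }
    }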
 
class  ComponentInput
 Input node into a component. At the time of component construction, a node can be fed into this input. The incoming node must have the same dimension as this node. This node will then act as a passthrough for the incoming node. In this way, the component can be used at any point in the graph, not just at the beginning. More...
 
struct  ComponentInputBinding
 Dynamically binds a node to a component input. More...
 
class  ConcatNode
 Merges two or more node streams into one. Each input stream must have the same number of rows. The output dimension is the sum of the input dimensions. More...
 
class  Constants
 Common constants. More...
 
class  Context
 Every node in the AI graph must belong to the same context. More...
 
class  Conv2DInfo
 
class  Conv2DLayer
 2D convolution layer. Input is interpreted as a 2D grid laid out in row-major order. Filters are applied to blocks of the input grid at a time, producing another 2D grid as output. This 2D grid can be fed into neuron layers, where it will be treated as a 1D array. More...
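
 To make "row-major order" concrete: cell (row, col) of a grid with width columns lives at flat index row * width + col. A minimal sketch (plain C#, not SmartEngine API):

    static class RowMajorSketch
    {
        // Row-major layout: an H x W grid stored in a flat 1D array,
        // so cell (row, col) sits at index row * width + col.
        static float CellAt(float[] grid, int width, int row, int col)
            => grid[row * width + col];
    }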
 
class  CuriosityModule
 A curiosity module is a way of rewarding an agent for behavior not yet seen. Rewards are based on how expected the next set of observations is, given the previous observations and the chosen actions: unexpected results yield larger rewards than previously seen behavior, encouraging the agent to be curious and explore the world. More...
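
 A common formulation of this idea (standard curiosity-driven exploration; not necessarily SmartEngine's exact computation) gives an intrinsic reward proportional to the prediction error of a learned forward model f over observations o and actions a:

    r_t \propto \left\| f(o_t, a_t) - o_{t+1} \right\|^2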
 
class  D4PGTrainer
 The D4PGTrainer is a reinforcement learning trainer that is composed of two parts: an actor sub-graph and a critic sub-graph. Unlike A2C and PPO, the critic graph is created and managed internally by SmartEngine. You only need to supply the actor graph. More...
 
class  DivideNode
 Component-wise division of two inputs. More...
 
class  ExpNode
 Component-wise natural exponentiation of X. More...
 
class  ExtractNode
 Extracts a subset of columns from the input node. The start index + column count must not exceed the output dimension of the input node. More...
 
class  GeneSwapInfo
 GeneticTrainer gene swap info. More...
 
class  GeneticAgentTrainer
 A genetic trainer that uses agents to automatically set the loss of each chromosome. Agents should be mapped to a chromosome after creation by calling MapAgentToChromosome(). More...
 
class  GeneticTrainer
 Standard instance of IGeneticTrainer. More...
 
class  GeneticTrainerInitializationInfo
 Initializes the GeneticTrainer with a new population. More...
 
class  GeneticTrainingInfo
 GeneticTrainer parameter info. More...
 
class  GradientDescentTrainer
 Trains a set of networks using gradient descent. Training with GradientDescentTrainer requires a Loss structure, and thus the expected output of the network must be known. If only a relative loss for the network is known, a GeneticTrainer must be used instead. More...
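
 For reference, the supported algorithms (see GradientDescentTrainingAlgorithm below) are adaptive variants of the basic gradient descent update; in generic notation, with learning rate \eta and loss L:

    w \leftarrow w - \eta \, \frac{\partial L}{\partial w}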
 
class  GradientDescentTrainingInfo
 GradientDescentTrainer training info. More...
 
class  Graph
 A graph is a collection of buffers and nodes that together form a neural network. The graph is created from a JSON definition file. After creation, the contents of the graph can be loaded from or saved to disk. More...
 
class  GraphComponent
 A graph component allows graphs to embed other graphs. Nodes within components can be referenced using the following syntax: [Component Name]:[Component Graph Node Name]. References can be chained together to dive into sub-components. More...
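
 For example (all names here are hypothetical): a node named Hidden1 inside a component named Encoder would be referenced as Encoder:Hidden1, and a node inside a nested component as Encoder:Block1:Hidden1.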
 
class  GraphController
 Represents a use of a graph. Owners of the controller feed data to the controller at regular intervals. When the graph model is executed, the controller will have data that can be acted upon. More...
 
class  GraphInput
 Base class for input nodes into the AI graph. More...
 
class  GraphInputOutput
 Views on top of models use this class to inject data into, and extract data from, the underlying model. More...
 
class  GraphManager
 Connects graph controllers with graph models. Graph controllers define a single use of a graph. They pipe data to the model and later act on the output of the model. The model (usually a graph, but can be a dummy class) has the job of aggregating the input from one or more controllers and producing an output for each. More...
 
class  GraphModelOutput
 Used to return output back to controllers. More...
 
class  GraphNode
 A logical node in the AI graph. Some nodes, like NeuralNetwork, are composed of other nodes (neuron layers). More...
 
interface  IAgentFactory
 Trainers implement this to create agents. More...
 
interface  IGeneticTrainer
 Trains a NeuronLayer / NeuralNetwork using a genetic algorithm. With each step, networks are sorted by loss. The top networks are left unchanged, while the bottom networks are replaced by new ones. The new networks are formed by mixing the weights of the top networks and randomly mutating weights / neurons. Lastly, the bottom percent of the networks can be replaced with completely random weights, adding some variability to the pool. More...
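
 A minimal standalone sketch of one such step (plain C#, not the SmartEngine implementation; the elite fraction, mutation rate, and weight ranges are illustrative only):

    using System;
    using System.Linq;

    static class GeneticStepSketch
    {
        // One generation: sort by loss, keep the elite unchanged, rebuild
        // the rest by mixing elite weights and mutating, then replace the
        // bottom slice with completely random weights.
        static void Step(float[][] population, float[] losses, Random rng)
        {
            int[] order = Enumerable.Range(0, population.Length)
                                    .OrderBy(i => losses[i]).ToArray();
            int elite = Math.Max(1, population.Length / 4); // illustrative
            int randomized = population.Length / 10;        // illustrative

            for (int rank = elite; rank < population.Length - randomized; rank++)
            {
                float[] child = population[order[rank]];
                float[] parentA = population[order[rng.Next(elite)]];
                float[] parentB = population[order[rng.Next(elite)]];
                for (int w = 0; w < child.Length; w++)
                {
                    // Crossover: take each weight from a random elite parent.
                    child[w] = rng.Next(2) == 0 ? parentA[w] : parentB[w];
                    // Mutation: occasionally perturb a weight.
                    if (rng.NextDouble() < 0.01)
                        child[w] += (float)(rng.NextDouble() - 0.5);
                }
            }

            // Bottom slice: completely random weights for variability.
            for (int rank = population.Length - randomized; rank < population.Length; rank++)
                for (int w = 0; w < population[order[rank]].Length; w++)
                    population[order[rank]][w] = (float)(rng.NextDouble() * 2 - 1);
        }
    }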
 
interface  IGraphModel
 
class  LayerInfo
 Layer information. More...
 
class  LengthNode
 Gets the length of a vector. Output is a one-dimensional value equal to the square root of the sum of the squared columns. More...
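
 Written out, the output is the Euclidean length of the input vector x:

    \|x\| = \sqrt{\sum_i x_i^2}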
 
class  LogNode
 Component-wise natural log of X. More...
 
class  Loss
 The loss of a NeuralNetwork is computed using the formula (Expected Output - Actual Output)^2. The mean of these values across the output tensor is used when reducing the loss to a single value. More...
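
 In standard notation, with y the expected output, \hat{y} the actual output, and N the number of output values, this is the mean squared error:

    L = \frac{1}{N} \sum_{i=1}^{N} \left( y_i - \hat{y}_i \right)^2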
 
class  LossTrainer
 Base class for NeuralNetwork loss trainers. More...
 
class  LossTrainingMethodInfo
 
class  MaximumNode
 Takes the maximum of the inputs. More...
 
class  MinimumNode
 Takes the minimum of the inputs. More...
 
class  MultiplexerNode
 Uses the value from the selector to decide which of the inputs to pass through. The selected input will be transmitted without adjustment. Each input should have the same dimension. The selector should have a dimension of 1 and a sigmoid activation. More...
 
class  MultiplyNode
 Component-wise multiplication of two inputs. More...
 
class  MutationInfo
 GeneticTrainer mutation info. More...
 
class  NegateNode
 Component-wise negation of X. More...
 
class  NeuronLayer
 A neuron layer is the trainable unit in the neural network graph. These can be chained together to allow the network to learn and represent complex relationships. More...
 
class  NormalizeNode
 Treats spans of columns in the input as vectors to be normalized. Columns not covered by a span will be passed through unchanged. The output dimension will be equal to the input dimension. More...
 
class  NormalNode
 Returns a random value from the normal distribution given by the input nodes. NOTE: Input takes in the variance and not the standard deviation. More...
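
 That is, the node samples

    x \sim \mathcal{N}(\mu, \sigma^2)

 where the inputs supply the mean \mu and the variance \sigma^2 directly; if you have a standard deviation, square it before feeding it in.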
 
class  Parameter
 A parameter is similar to a NeuronLayer in that it is trainable. However, it takes no input and is simply a layer of weights that can be used as input into other parts of the graph. More...
 
class  PoolingLayer
 2D pooling layer. Typically used after a conv2d layer to reduce the output dimension. Fixed size of 2x2 blocks. More...
 
class  PPOTrainer
 The PPO Trainer is a reinforcement learning trainer that is composed of two parts: an actor sub-graph and a critic sub-graph. One of the differences between PPO and A2C is that PPO works off of large batches of data. Instead of training on each small batch as it arrives, PPO waits until there is a mega batch, called the trajectory. Once that much data has been received, PPO trains over it in normal batch sizes several times and then waits for the next trajectory. More...
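
 A rough sketch of that batching pattern (plain C#; the sizes, names, and the TrainOnBatch helper are hypothetical, not the SmartEngine API):

    using System.Collections.Generic;

    static class PpoBatchingSketch
    {
        const int BatchSize = 64;            // illustrative sizes
        const int TrajectorySize = 2048;
        const int EpochsPerTrajectory = 3;

        // Accumulate experience until a full trajectory is collected,
        // then train over it several times in normal-sized batches.
        static void OnExperience(List<float[]> trajectory, float[] sample)
        {
            trajectory.Add(sample);
            if (trajectory.Count < TrajectorySize)
                return;

            for (int epoch = 0; epoch < EpochsPerTrajectory; epoch++)
                for (int start = 0; start < trajectory.Count; start += BatchSize)
                    TrainOnBatch(trajectory, start, BatchSize); // hypothetical helper

            trajectory.Clear(); // wait for the next trajectory
        }

        static void TrainOnBatch(List<float[]> data, int start, int count)
        {
            // Placeholder for one gradient step over data[start .. start+count).
        }
    }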
 
class  RegularizationLossInfo
 Regularization tries to keep the weight values from exploding during gradient descent by adding the L1 and L2 losses of the weights. More...
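
 Written out, with \lambda_1 and \lambda_2 as the L1 and L2 coefficients (conventional notation; the corresponding SmartEngine parameter names may differ), the added loss over the weights w is:

    L_{reg} = \lambda_1 \sum_i |w_i| + \lambda_2 \sum_i w_i^2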
 
class  RLTrainer
 Base class for all reinforcement learning trainers. More...
 
class  Rotate2DNode
 Rotation of a 2D vector. Output is also a 2D vector. Rotation value is expected to be in radians. More...
 
class  ScalarValueNode
 Returns a constant scalar value. More...
 
class  StandardGraphModel
 
class  StopGradientNode
 Stops the gradient from flowing backwards from this point. Only meaningful when training with gradient descent or reinforcement learning. More...
 
class  SubtractNode
 Component-wise subtraction of two inputs. More...
 
class  UnpoolingLayer
 2D unpooling layer. Typically used before a conv2d transpose layer to expand the output dimension. Fixed size of 2x2 blocks. More...
 
class  UserGraphModel
 Takes input from a controller and produces an output. If using a graph as the model, this class does not need to be used or extended. However, you can provide a derived instance of this class to create "dummy" models. More...
 
class  UserStandardGraphModel
 Allows for overriding of the standard graph model at the cost of another level of indirection. Can be used during training to modify the graph output before it is sent back to the controller. Consider fixing up data in the controller instead of using this class at runtime. More...
 

Enumerations

enum  ChoiceType { ChoiceType.Random, ChoiceType.Max }
 Defines the decision algorithm of the ChoiceNode. More...
 
enum  DeviceType { DeviceType.Default, DeviceType.Cpu, DeviceType.Gpu }
 The type of context to create. More...
 
enum  Conv2DType { Conv2DType.Conv2D, Conv2DType.Conv2DWithBias, Conv2DType.Conv2DTranspose, Conv2DType.Conv2DTransposeWithBias }
 
enum  GeneticAgentScoringMethod : byte { GeneticAgentScoringMethod.Sum, GeneticAgentScoringMethod.Average }
 Defines how the trainer deals with chromosome scores from multiple agents. More...
 
enum  MutationTarget { MutationTarget.Auto, MutationTarget.Weight, MutationTarget.Neuron }
 
enum  GeneSwapTarget { GeneSwapTarget.Weight, GeneSwapTarget.Neuron }
 
enum  GradientDescentTrainingAlgorithm { GradientDescentTrainingAlgorithm.Adam, GradientDescentTrainingAlgorithm.AdaMax }
 
enum  LossTrainingMethod { LossTrainingMethod.WholeDataset, LossTrainingMethod.Stochastic }
 
enum  LayerType { LayerType.LinearNoBias, LayerType.Linear, LayerType.LSTM }
 The type of layer to create. More...
 
enum  ActivationType {
  ActivationType.None, ActivationType.Sigmoid, ActivationType.Tanh, ActivationType.Relu,
  ActivationType.Relu6, ActivationType.LeakyRelu, ActivationType.Softmax, ActivationType.Elu,
  ActivationType.Selu, ActivationType.Softplus
}
 The activation function to apply to the output of the NeuronLayer. More...
 
enum  PoolType : byte { PoolType.Max, PoolType.Average, PoolType.Count }
 

Enumeration Type Documentation

◆ ActivationType

The activation function to apply to the output of the NeuronLayer.

Enumerator
None 

No activation

Sigmoid 

Maps output into the range [0..1]

Tanh 

Maps output into the range [-1..1]

Relu 

max(x, 0)

Relu6 

min(max(x, 0), 6)

LeakyRelu 

Similar to Relu, but allows for scaled negative values

Softmax 

Normalizes the outputs into a probability distribution that sums to 1.

Elu 

Similar to Relu, but uses an exponential for negative values

Selu 

Similar to Relu, but uses an exponential for negative values. Positive values are scaled. Best when training with gradient descent. The inputs to the network should be in the [-2..2] range.

Softplus 

Maps output into a positive number
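
For reference, the standard definitions of these functions (the exact constants SmartEngine uses, such as the LeakyRelu slope \alpha and the Selu scale \lambda, are assumptions here):

    \begin{aligned}
    \text{Sigmoid}(x) &= \frac{1}{1 + e^{-x}} \\
    \text{Tanh}(x) &= \tanh(x) \\
    \text{Relu}(x) &= \max(x, 0) \\
    \text{Relu6}(x) &= \min(\max(x, 0), 6) \\
    \text{LeakyRelu}(x) &= \begin{cases} x & x \ge 0 \\ \alpha x & x < 0 \end{cases} \\
    \text{Softmax}(x)_i &= \frac{e^{x_i}}{\sum_j e^{x_j}} \\
    \text{Elu}(x) &= \begin{cases} x & x \ge 0 \\ \alpha (e^x - 1) & x < 0 \end{cases} \\
    \text{Selu}(x) &= \lambda \cdot \text{Elu}(x), \quad \lambda \approx 1.0507,\ \alpha \approx 1.6733 \\
    \text{Softplus}(x) &= \ln(1 + e^x)
    \end{aligned}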

◆ ChoiceType

Defines the decision algorithm of the ChoiceNode.

Enumerator
Random 

Treat the output as a probability distribution and sample from that distribution on each evaluation. Returns the index of the chosen output.

Max 

Return the index of the output with the highest value.

◆ Conv2DType

Enumerator
Conv2D 

2D convolution layer, no bias

Conv2DWithBias 

2D convolution layer, with bias

Conv2DTranspose 

2D transpose convolution layer (sometimes called deconvolution), no bias

Conv2DTransposeWithBias 

2D transpose convolution layer (sometimes called deconvolution), with bias

◆ DeviceType

The type of context to create.

Enumerator
Default 

Chooses the best device for the current system.

Cpu 

Train and evaluate using only the CPU.

Gpu 

Train and evaluate using only the GPU.

◆ GeneSwapTarget

Enumerator
Weight 

Swapping occurs at the individual weight level across all neurons.

Neuron 

Swapping applies to all weights in a neuron.

◆ GeneticAgentScoringMethod

Defines how the trainer deals with chromosome scores from multiple agents.

Enumerator
Sum 

Sum the scores of all agents associated with the chromosome

Average 

Average the scores of all agents associated with the chromosome

◆ GradientDescentTrainingAlgorithm

Enumerator
Adam 

Good default choice

AdaMax 

Can be more stable in certain circumstances, such as RL training

◆ LayerType

The type of layer to create.

Enumerator
LinearNoBias 

Linear with no bias (X * W)

Linear 

Linear with bias (X * W + B)

LSTM 

Long short-term memory layer

◆ LossTrainingMethod

Enumerator
WholeDataset 

Train against all data in the buffer in groups of size BatchSize.

Stochastic 

Train by random sampling of the input data in groups of size BatchSize.

◆ MutationTarget

Enumerator
Auto 

Chooses the default value depending on the node type.

Weight 

Mutation occurs at the individual weight level across all neurons.

Neuron 

Mutation applies to all weights in a neuron.

◆ PoolType

Enumerator
Max 

Outputs the maximum value of the block of inputs

Average 

Outputs the average value of the block of inputs