SmartEngine 1.6.0
Graph JSON Schema

Example

{
    "Nodes": [
        {
            "Name": "Input",
            "Type": "BufferInput",
            "Parameters": {
                "Dimension": 12
            }
        },
        {
            "Name": "LinearLayer1",
            "Type": "NeuronLayer",
            "Parameters": {
                "Input": "Input",
                "Type": "Linear",
                "ActivationType": "Selu",
                "NeuronCount": 32
            }
        },
        {
            "Name": "LinearLayer2",
            "Type": "NeuronLayer",
            "Parameters": {
                "Input": "LinearLayer1",
                "Type": "Linear",
                "ActivationType": "Selu",
                "NeuronCount": 24
            }
        },
        {
            "Name": "Output",
            "Type": "NeuronLayer",
            "Parameters": {
                "Input": "LinearLayer2",
                "Type": "Linear",
                "ActivationType": "Tanh",
                "NeuronCount": 2
            }
        }
    ]
}

Schema

Top Level Members

  • Name: The name of the graph
  • SequenceLength: The sequence length used when stepping the network as part of the graph MVC system.
  • Nodes: Array of nodes in the graph
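
For illustration, a minimal top-level layout might look like the following sketch (the graph name, sequence length, and node are placeholder values):

{
    "Name": "ExampleGraph",
    "SequenceLength": 1,
    "Nodes": [
        {
            "Name": "Input",
            "Type": "BufferInput",
            "Parameters": {
                "Dimension": 4
            }
        }
    ]
}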

Nodes

Activation

Applies an activation function to the input. C# equivalent: ActivationNode

  • Name: The name of the node
  • Type: "Activation"
  • Parameters: An array of the following parameters
    • Input: The name of the node that is input to this node
    • Type: The type of activation to apply. See the activation type list in NeuronLayer.
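
For illustration, an Activation node applying Relu to the output of the "LinearLayer1" node from the example at the top of this page might be declared as:

{
    "Name": "Activation1",
    "Type": "Activation",
    "Parameters": {
        "Input": "LinearLayer1",
        "Type": "Relu"
    }
}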

Add

Component wise addition of two nodes. C# equivalent: AddNode

  • Name: The name of the node
  • Type: "Add"
  • Parameters: An array of the following parameters
    • InputA: The name of the node in the graph that is input A in the expression (A + B)
    • InputB: The name of the node in the graph that is input B in the expression (A + B)
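
As a sketch, an Add node summing two hypothetical nodes "LayerA" and "LayerB" (Subtract, Multiply, and Divide follow the same pattern with their respective Type values):

{
    "Name": "Sum",
    "Type": "Add",
    "Parameters": {
        "InputA": "LayerA",
        "InputB": "LayerB"
    }
}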

BufferInput

This node is used as input into the graph or can be filled with the expected output when training using gradient descent. C# equivalent: BufferInput

  • Name: The name of the node
  • Type: "BufferInput"
  • Parameters: Array of parameters
    • Dimension: The size of the buffer, which is also its number of columns. Data set on the buffer must be a multiple of this value.

Choice

Chooses an index from the input each evaluation, weighting the choice by the input values. Returns the index of the chosen value. C# equivalent: ChoiceNode

  • Name: The name of the node
  • Type: "Choice"
  • Parameters: An array of the following parameters
    • Input: The name of the node that is input to this node
    • Type: The type of choice to make
      • "Max": Take the max value of the inputs along the specified axis
      • "Random": Treat the input values as a probability distribution and take a random value from it. Input values should have a 'None' activation because a softmax will be applied.
    • Axis: The axis to choose along. Defaults to 1.
      • 0: Choose over the rows. The dimension of the output will be the dimension of the input.
      • 1: Choose over the columns. The dimension of the output will be the number of input rows.
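
For illustration, a Choice node making a random selection from the output of a hypothetical "PolicyLayer" node (which, per the note above, should have a 'None' activation):

{
    "Name": "ActionChoice",
    "Type": "Choice",
    "Parameters": {
        "Input": "PolicyLayer",
        "Type": "Random",
        "Axis": 1
    }
}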

Component Input

Input node into a component graph that allows for connections to be dynamically bound. If no binding is specified at the time of creation, a buffer input node will be connected automatically. The name of the buffer input node will be of the form "<Component Input Name>Buffer".

  • Name: The name of the node
  • Type: "ComponentInput"
  • Parameters: An array of the following parameters
    • Dimension: The size of the input, which is also its number of columns. Data set on the input's buffer must be a multiple of this value, and any bound input nodes must have this dimension.

Concat

Concatenates one or more input streams into a single output stream. The input streams must have the same row count at runtime, but can have different dimensions. C# equivalent: ConcatNode

  • Name: The name of the node
  • Type: "Concat"
  • Parameters: An array of the following parameters
    • Inputs: An array of names of nodes that will be combined. There must be at least two inputs
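
As a sketch, a Concat node combining two hypothetical input nodes:

{
    "Name": "CombinedInput",
    "Type": "Concat",
    "Parameters": {
        "Inputs": [
            "PositionInput",
            "VelocityInput"
        ]
    }
}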

Conv2D Layer

2D convolution layer. Input is interpreted as a 2D grid laid out in row major order. Filters are applied to blocks of the input grid at a time, producing another 2D grid as output. This 2D grid can be fed into neuron layers, where it will be treated as a 1D array.

  • Name: The name of the node
  • Type: "Conv2DLayer"
  • Parameters: An array of the following parameters
    • Type: The type of convolution to perform. Must be one of the following values
      • "Conv2D": Regular 2D convolution. Filter size and stride should fit the input exactly.
      • "Conv2DWithBias": 2D convolution with an additional bias added.
      • "Conv2DTranspose": An inverse convolution. If regular convolution is used to extract data from an image, the transpose could be used to construct an image from the extracted data.
      • "Conv2DTransposeWithBias": An inverse convolution with bias.
    • Input: The name of the node that is input to the conv2d layer
    • InputWidth: The width of the input grid. Not necessary to specify if the input is a Conv2D / Pooling / Unpooling layer.
    • InputHeight: The height of the input grid. Not necessary to specify if the input is a Conv2D / Pooling / Unpooling layer.
    • InputChannels: The number of channels for each cell in the grid. For instance, an RGB image would have this set to 3. Not necessary to specify if the input is a Conv2D / Pooling / Unpooling layer.
    • FilterWidth: The width of each filter used in convolution
    • FilterHeight: The height of each filter used in convolution
    • StrideWidth: The number of cells to skip horizontally when iterating over the grid
    • StrideHeight: The number of cells to skip vertically when iterating over the grid
    • OutputChannels: The number of output channels to produce. This can be thought of as the number of filters to apply.
    • ActivationType: The type of activation to apply to the output of this layer. See NeuronLayer for a list of valid values.
    • WeightStandardDeviation: Optional override on the standard deviation of the weights when the layer filters are randomized. A value of 1 is used if not specified.
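
For illustration, a conv2d layer reading a hypothetical 32x32, 3-channel input node named "ImageInput", using 4x4 filters with a matching 4x4 stride so the filters fit the input exactly:

{
    "Name": "Conv1",
    "Type": "Conv2DLayer",
    "Parameters": {
        "Type": "Conv2DWithBias",
        "Input": "ImageInput",
        "InputWidth": 32,
        "InputHeight": 32,
        "InputChannels": 3,
        "FilterWidth": 4,
        "FilterHeight": 4,
        "StrideWidth": 4,
        "StrideHeight": 4,
        "OutputChannels": 8,
        "ActivationType": "Relu"
    }
}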

Conv2D Layer Instance

Instances a conv2d layer. Instances share the same weights, but have different inputs from the original.

  • Name: The name of the node
  • Type: "Conv2DLayerInstance"
  • Parameters: Array of parameters
    • Input: The name of the node that is input to the new instance
    • Instance: The name of the conv2d layer to instance

Divide

Component wise division of two nodes. C# equivalent: DivideNode

  • Name: The name of the node
  • Type: "Divide"
  • Parameters: An array of the following parameters
    • InputA: The name of the node in the graph that is input A in the expression (A / B)
    • InputB: The name of the node in the graph that is input B in the expression (A / B)

Exp

Outputs the component wise natural exponentiation of the input

  • Name: The name of the node
  • Type: "Exp"
  • Parameters: An array of the following parameters
    • Input: The input into the expression e^X

Extract

Extracts a subset of columns from an input.

  • Name: The name of the node
  • Type: "Extract"
  • Parameters: An array of the following parameters
    • Input: The input node to extract columns from.
    • ColumnStartIndex: The integer inclusive starting column index to extract.
    • ColumnCount: The integer column count to extract. Set to -1 to take the rest of the columns in the input.
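
For illustration, an Extract node taking the first two columns of a hypothetical "StateInput" node:

{
    "Name": "FirstTwoColumns",
    "Type": "Extract",
    "Parameters": {
        "Input": "StateInput",
        "ColumnStartIndex": 0,
        "ColumnCount": 2
    }
}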

Graph Component

Allows for graphs to reference other graphs by resource name. Nodes within the component can be referenced by "<graph component name>:<component node name>". Graph components are not trained with the parent graph and are expected to be pre-trained. C# equivalent: GraphComponent

  • Name: The name of the node
  • Type: "GraphComponent"
  • Parameters: An array of the following parameters
    • GraphResource: The resource name of the graph to embed.
    • OutputNode: Optional output node in the referenced graph. If not specified, connections to the referenced graph must be made with the ':' name format.
    • InputBindings: Optional array of input connections into the component graph. The array must contain objects with the following parameters. Each binding connects a node in the parent (containing) graph to a component input node in the component graph.
      • InputName: The name of the component input node in the containing component graph
      • NodeName: The name of the node in the parent (containing) graph that will feed into the component graph.
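
As a sketch, embedding a hypothetical pre-trained graph resource "EncoderGraph", exposing its "Latent" node as the output, and binding its component input "ObservationInput" to a hypothetical "StateInput" node in the containing graph (all names are placeholders):

{
    "Name": "Encoder",
    "Type": "GraphComponent",
    "Parameters": {
        "GraphResource": "EncoderGraph",
        "OutputNode": "Latent",
        "InputBindings": [
            {
                "InputName": "ObservationInput",
                "NodeName": "StateInput"
            }
        ]
    }
}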

Graph Component Instance

Instances a graph component. Instances share the same weights as the original. Any static neuron layers will be referenced directly instead of instanced, improving performance of common nodes.

  • Name: The name of the node
  • Type: "GraphComponentInstance"
  • Parameters: An array of the following parameters
    • Instance: The name of the graph component we are instancing.

Length

Treats the input as a vector and computes its length.

  • Name: The name of the node
  • Type: "Length"
  • Parameters: An array of the following parameters
    • Input: The vector to compute the length of. Can have any dimension.

Log

Outputs the component wise natural log of the input

  • Name: The name of the node
  • Type: "Log"
  • Parameters: An array of the following parameters
    • Input: The input into the expression ln(X)

Maximum

Outputs the maximum of the two inputs

  • Name: The name of the node
  • Type: "Maximum"
  • Parameters: An array of the following parameters
    • A: First input node name
    • B: Second input node name

Minimum

Outputs the minimum of the two inputs

  • Name: The name of the node
  • Type: "Minimum"
  • Parameters: An array of the following parameters
    • A: First input node name
    • B: Second input node name

Multiplexer

Uses the value from the selector to decide which of the inputs to pass through. The selected input will be transmitted without adjustment. Each input should have the same dimension. The selector should have a dimension of 1 and a sigmoid activation.

Note: The selector input and its subgraph cannot be trained through gradient descent nor reinforcement learning.

  • Name: The name of the node
  • Type: "Multiplexer"
  • Parameters: An array of the following parameters
    • Selector: Decides which input to pass through based on the max value of the outputs. Should have a dimension equal to the number of inputs.
    • Inputs: Array of inputs that will be selected between.
    • SelectionMethod: Optional string of one of the following values. Defaults to "Max".
      • "Max": Select the max selector component.
      • "Random": Select a random input based on the distribution of the selector. The selector should have no activation applied as a softmax will be applied internally.

Multiply

Component wise multiplication of two nodes. C# equivalent: Multiply

  • Name: The name of the node
  • Type: "Multiply"
  • Parameters: An array of the following parameters
    • InputA: The name of the node in the graph that is input A in the expression (A * B)
    • InputB: The name of the node in the graph that is input B in the expression (A * B)

Negate

Outputs the component wise negation of the input

  • Name: The name of the node
  • Type: "Negate"
  • Parameters: An array of the following parameters
    • Input: The input into the expression -X

Neuron Layer

Basic trainable unit in a graph. A neuron layer consists of trainable weights and biases that are multiplied against an input and have an activation function applied to the output. C# equivalent: NeuronLayer

  • Name: The name of the node
  • Type: "NeuronLayer"
  • Parameters: Array of parameters
    • Input: The name of the node that is input to the neuron layer
    • NeuronCount: The number of neurons in this layer. This is also the output dimension of the layer.
    • Type: The type of neuron layer. Must be one of the following values
      • "Linear": Output = activation(input * weights + bias)
      • "LinearNoBias": Output = activation(input * weights)
      • "LSTM": Long short term memory. A special type of neuron layer that is great at handling inputs that have some implicit time aspect. Do not specify an activation function when using an LSTM. Tanh is automatically applied. LSTM layers must be stepped to be useful. Step whenever the input changes (usually once per evaluation). Reset to clear the LSTM to a default state. See StepNeuronLayers and ResetNeuronLayers. See the Tips section for guidance on when to step and reset.
    • ActivationType: The activation function applied to the output of the neuron layer. One of the following values
      • "None": No activation function is applied. Use this on an LSTM layer
      • "Sigmoid": Squeezes the value into the range [0..1]
      • "Tanh": Squeezes the value into the range [-1..1]
      • "Relu": See Wikipedia
      • "Relu6": Similar to Relu, but capped at value of 6
      • "LeakyRelu": Similar to Relu, but allows for negative values
      • "Softmax": e^value / sum(e^value across all values). Useful for probability distributions. See Wikipedia
      • "Elu": See Wikipedia
      • "Selu": See Towards Data Science for an explanation. Works well with gradient descent when the input follows a normal distribution centered on 0 with a standard deviation of 1
      • "Softplus": See Wikipedia
      • "Normalize": Treats each output row as a vector and normalizes the result so its length is 1.0
      • "NormalizeAllowSmallLengths": Treats each output row as a vector. If the length of the vector is greater than 1.0, it is normalized to length 1.0. Otherwise it is passed unchanged.
    • WeightStandardDeviation: Optional override on the standard deviation of the weights when the layer is randomized. Normally a value of 1 is used, except in the case of a Selu activation function
    • StaticInstance: If true, this node will not be instanced, but rather referenced directly when instancing the graph it is in. Note that if set, this flag retroactively applies to all nodes in the input tree too.
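
The example at the top of this page shows Linear layers. As a further sketch, an LSTM layer reading the "LinearLayer1" node from that example (note the "None" activation, since Tanh is applied automatically):

{
    "Name": "Memory",
    "Type": "NeuronLayer",
    "Parameters": {
        "Input": "LinearLayer1",
        "Type": "LSTM",
        "ActivationType": "None",
        "NeuronCount": 16
    }
}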

Neuron Layer Instance

Instances a neuron layer. Instances share the same weights, but have different inputs from the original.

  • Name: The name of the node
  • Type: "NeuronLayerInstance"
  • Parameters: Array of parameters
    • Input: The name of the node that is input to the new instance
    • Instance: The name of the neuron layer to instance
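
For illustration, an instance of the "LinearLayer1" layer from the top example, driven by a different hypothetical input node:

{
    "Name": "LinearLayer1Instance",
    "Type": "NeuronLayerInstance",
    "Parameters": {
        "Input": "OtherInput",
        "Instance": "LinearLayer1"
    }
}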

Normal

Returns values from a normal distribution. Different values are returned with each evaluation. C# equivalent: NormalNode

  • Name: The name of the node
  • Type: "Normal"
  • Parameters: An array of the following parameters
    • Mean: The name of the node that will be used as the mean of the normal distribution. Must have the same dimension as the variance.
    • Variance: The name of the node that will be used as the variance of the normal distribution. Must have the same dimension as the mean. The variance is expected to have a None activation.

Normalize

Normalizes spans of columns across the input. C# equivalent: NormalizeNode

  • Name: The name of the node
  • Type: "Normalize"
  • Parameters: An array of the following parameters
    • Input: The input node to normalize.
    • ColumnSpans: Array of objects that define the columns we want to treat as vectors and normalize.
      • Start: The column start index.
      • Count: The number of columns the vector spans. Specify -1 to normalize from Start to the end of the row.
    • AllowSmallLengths: If specified and set to true, spans of length < 1.0 are passed through unchanged. Defaults to false.
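
As a sketch, a Normalize node that treats columns 0-1 of a hypothetical "DirectionInput" node as one vector and the remaining columns as another:

{
    "Name": "NormalizedDirections",
    "Type": "Normalize",
    "Parameters": {
        "Input": "DirectionInput",
        "ColumnSpans": [
            { "Start": 0, "Count": 2 },
            { "Start": 2, "Count": -1 }
        ],
        "AllowSmallLengths": false
    }
}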

Parameter

A trainable set of weights that have no input.

  • Name: The name of the node
  • Type: "Parameter"
  • Parameters: An array of the following parameters
    • Dimension: The number of weights in the parameter. The output of this node will be a [1 x Dimension] matrix.

Pooling Layer

2D pooling layer. Input is interpreted as a 2D grid laid out in row major order. Blocks of 2x2 are condensed into a 1x1 block by taking either the average or the max value of the cells. Typically, this follows a conv2d layer.

  • Name: The name of the node
  • Type: "PoolingLayer"
  • Parameters: An array of the following parameters
    • Type: The type of pooling to perform. Must be one of the following values
      • "Max": Take the maximum of the input block
      • "Average": Take the average value of the input block
    • Input: The name of the node that is input to the pooling layer
    • InputWidth: The width of the input grid. Not necessary to specify if the input is a Conv2D layer.
    • InputHeight: The height of the input grid. Not necessary to specify if the input is a Conv2D layer.
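
For illustration, a max pooling layer following the hypothetical "Conv1" layer sketched in the Conv2D Layer section (because the input is a Conv2D layer, InputWidth and InputHeight can be omitted):

{
    "Name": "Pool1",
    "Type": "PoolingLayer",
    "Parameters": {
        "Type": "Max",
        "Input": "Conv1"
    }
}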

Rotate2D

Rotates a 2D vector by a specified radian amount.

  • Name: The name of the node
  • Type: "Rotate2D"
  • Parameters: An array of the following parameters
    • InputVector: The vector to rotate. Must have a dimension of 2.
    • InputRotation: The rotation amount, in radians. Must have a dimension of 1.

Scalar Value

A node which outputs a constant value as a 1x1 matrix.

  • Name: The name of the node
  • Type: "ScalarValue"
  • Parameters: An array of the following parameters
    • Value: Floating point output value of the node.

StopGradient

Stops the gradients from backpropagating further than this node. Useful for isolating parts of the graph to train when using gradient descent. Has no effect with genetic training. C# equivalent: StopGradientNode

  • Name: The name of the node
  • Type: "StopGradient"
  • Parameters: An array of the following parameters
    • Input: The name of the node that is input to this node. The output of the StopGradient node is simply the input. However, backpropagation won't reach the input node.

Subtract

Component wise subtraction of two nodes. C# equivalent: SubtractNode

  • Name: The name of the node
  • Type: "Add"
  • Parameters: An array of the following parameters
    • InputA: The name of the node in the graph that is input A in the expression (A - B)
    • InputB: The name of the node in the graph that is input B in the expression (A - B)

Unpooling Layer

2D unpooling layer. Input is interpreted as a 2D grid laid out in row major order. Each cell is expanded to a 2x2 block. This node cannot exist in isolation; it needs an associated pooling layer. The pooling layer's output shape and this node's input shape must match. Typically precedes a conv2d (transpose) layer.

  • Name: The name of the node
  • Type: "UnpoolingLayer"
  • Parameters: An array of the following parameters
    • Input: The name of the node that is input to the unpooling layer
    • PoolingLayer: The associated pooling layer this node matches with.
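
As a sketch, an unpooling layer paired with the hypothetical "Pool1" layer sketched in the Pooling Layer section, taking a hypothetical decoder node as input:

{
    "Name": "Unpool1",
    "Type": "UnpoolingLayer",
    "Parameters": {
        "Input": "DecoderLayer",
        "PoolingLayer": "Pool1"
    }
}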