Features
- Adding pooling & unpooling layer graph nodes
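
For reference, the idea behind paired pooling/unpooling nodes is that pooling records where each kept value came from so the unpooling node can scatter values back to those positions. The C++ below is a minimal, illustrative sketch under that assumption; it is not the library's actual node API.

```cpp
// Minimal sketch of max pooling with recorded indices and the matching
// unpooling (scatter) pass. Names and layout are illustrative only.
#include <cstddef>
#include <limits>
#include <vector>

struct PoolResult {
    std::vector<float> values;      // pooled activations
    std::vector<std::size_t> index; // flat index of each max, used by unpool
};

// 2x2 max pooling over a row-major [height x width] buffer.
PoolResult MaxPool2x2(const std::vector<float>& in, std::size_t h, std::size_t w) {
    PoolResult out;
    for (std::size_t y = 0; y + 1 < h; y += 2) {
        for (std::size_t x = 0; x + 1 < w; x += 2) {
            float best = -std::numeric_limits<float>::infinity();
            std::size_t bestIdx = 0;
            for (std::size_t dy = 0; dy < 2; ++dy)
                for (std::size_t dx = 0; dx < 2; ++dx) {
                    std::size_t idx = (y + dy) * w + (x + dx);
                    if (in[idx] > best) { best = in[idx]; bestIdx = idx; }
                }
            out.values.push_back(best);
            out.index.push_back(bestIdx);
        }
    }
    return out;
}

// Unpooling scatters each pooled value back to the position it came from;
// every other element of the restored buffer stays zero.
std::vector<float> Unpool(const PoolResult& pooled, std::size_t h, std::size_t w) {
    std::vector<float> out(h * w, 0.0f);
    for (std::size_t i = 0; i < pooled.values.size(); ++i)
        out[pooled.index[i]] = pooled.values[i];
    return out;
}
```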

Activation Node
- Fixing the node always outputting a "None" activation

Gradient Descent Trainer
- Fairer selection of neurons when neuron counts differ widely between nodes (see the selection sketch after this list)
- Improving the performance of the genetic trainer on large graphs
- When using a genetic trainer, loss is now evaluated before stepping so the values can be sorted; this means individual chromosome losses can't be changed when using a loss object.
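
To illustrate the evaluation order described in the last item, here is a hedged sketch of an evaluate-then-sort-then-step loop: because all losses are computed and the population is sorted before the step runs, a loss object only observes finalized values and has no hook to change an individual chromosome's loss mid-step. The types and placeholder loss are illustrative, not the library's API.

```cpp
// Sketch of the evaluate -> sort -> step ordering of a genetic trainer step.
#include <algorithm>
#include <vector>

struct Chromosome {
    std::vector<float> genes;
    float loss = 0.0f;
};

// Placeholder loss: sum of squared genes. The real trainer would run the
// graph and its loss object here.
float EvaluateLoss(const Chromosome& c) {
    float sum = 0.0f;
    for (float g : c.genes) sum += g * g;
    return sum;
}

void GeneticStep(std::vector<Chromosome>& population) {
    for (Chromosome& c : population)                 // 1. evaluate every loss first
        c.loss = EvaluateLoss(c);
    std::sort(population.begin(), population.end(),  // 2. sort by the frozen losses
              [](const Chromosome& a, const Chromosome& b) { return a.loss < b.loss; });
    // 3. only now would selection, crossover, and mutation step the population.
}
```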
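
For the selection item above: one way to make neuron selection fair when node sizes differ widely is to weight the node choice by its neuron count, which is equivalent to drawing a single index over all neurons in the graph, so a neuron in a large node is picked just as often as one in a small node. A minimal sketch under that assumption follows; it is not the trainer's actual implementation.

```cpp
// Graph-wide uniform neuron selection. Picking a node first and then a neuron
// inside it over-samples neurons in small nodes; drawing one index over the
// total neuron population gives every neuron the same selection probability.
#include <cstddef>
#include <random>
#include <utility>
#include <vector>

std::pair<std::size_t, std::size_t> PickNeuron(const std::vector<std::size_t>& neuronsPerNode,
                                               std::mt19937& rng) {
    std::size_t total = 0;
    for (std::size_t n : neuronsPerNode) total += n;
    if (total == 0) return {0, 0};                        // nothing to pick from
    std::uniform_int_distribution<std::size_t> flatDist(0, total - 1);
    std::size_t flat = flatDist(rng);
    for (std::size_t node = 0; node < neuronsPerNode.size(); ++node) {
        if (flat < neuronsPerNode[node])
            return {node, flat};                          // {node index, neuron index within node}
        flat -= neuronsPerNode[node];
    }
    return {0, 0};                                        // unreachable when total > 0
}
```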

GPU Training
- GPU graphs use significantly less GPU memory
- Fixing Conv2D gradients on the GPU
- Minor optimization on GPU to reduce the amount of data shuffled
- Better stability when training large datasets on the GPU
- Free unused GPU memory when any GPU context is destroyed, instead of only when all GPU contexts are destroyed (see the memory sketch after this list)
- Fix a bug resulting in the unnecessary creation and destruction of duplicate GPU kernel instances
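
A hedged sketch of the general technique a duplicate-instance fix typically relies on: cache kernel instances by their configuration and hand back the existing one instead of repeatedly creating and destroying equivalents. The names below are illustrative, not the library's real kernel types.

```cpp
// Kernel-instance caching keyed by configuration.
#include <map>
#include <memory>
#include <string>
#include <utility>

struct KernelInstance {
    explicit KernelInstance(std::string config) : config(std::move(config)) {}
    std::string config; // stands in for compiled GPU kernel state
};

class KernelCache {
public:
    // Returns the cached instance for this configuration, creating it only
    // the first time it is requested.
    std::shared_ptr<KernelInstance> Get(const std::string& config) {
        auto it = cache_.find(config);
        if (it == cache_.end())
            it = cache_.emplace(config, std::make_shared<KernelInstance>(config)).first;
        return it->second;
    }

private:
    std::map<std::string, std::shared_ptr<KernelInstance>> cache_;
};
```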
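
For the per-context freeing item above, a common pattern is for each GPU context to own its cached allocations and release them in its destructor, so memory is returned as soon as that context goes away rather than when the last context does. The sketch below assumes that pattern and uses a placeholder where a real backend would call the driver-level free (e.g. cudaFree).

```cpp
// Per-context GPU memory pooling: each context frees its own cached blocks
// when it is destroyed, not behind a global "last context" check.
#include <vector>

void ReleaseDeviceBlock(void* ptr); // placeholder for the driver-level free

class GpuContext {
public:
    void CacheBlock(void* ptr) { cached_.push_back(ptr); }

    // Unused memory is returned as soon as this context is destroyed.
    ~GpuContext() {
        for (void* ptr : cached_) ReleaseDeviceBlock(ptr);
    }

private:
    std::vector<void*> cached_; // unused device blocks kept for reuse
};
```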

Unreal
- Fix a bug causing the trainer to get stuck when a training client disconnects
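
As a hedged illustration only (the notes do not describe the actual fix): a distributed trainer typically stalls when it keeps waiting on a batch assigned to a client that has dropped, and re-queuing that client's outstanding work on disconnect lets training continue. The sketch below shows that generic pattern, not the plugin's networking code.

```cpp
// Generic sketch: return a disconnected client's in-flight batch to the queue
// so the trainer never waits on a peer that will not answer.
#include <cstdint>
#include <deque>
#include <unordered_map>

struct Batch { std::uint64_t id = 0; };

class DistributedTrainer {
public:
    void Assign(std::uint64_t clientId, const Batch& b) { inFlight_[clientId] = b; }

    // Called when a training client drops.
    void OnClientDisconnected(std::uint64_t clientId) {
        auto it = inFlight_.find(clientId);
        if (it != inFlight_.end()) {
            pending_.push_back(it->second); // hand the batch to another client later
            inFlight_.erase(it);
        }
    }

private:
    std::deque<Batch> pending_;                        // work waiting for a client
    std::unordered_map<std::uint64_t, Batch> inFlight_; // work assigned per client
};
```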