The following guide introduces some core concepts of TensorPack. In contrast to several other libraries, TensorPack consists of several modules for building complex deep learning algorithms and training models with high accuracy and at high speed.
### Layers and Architectures
The library also contains several pre-implemented neural network modules and layers:
- Convolution, Deconvolution
- FullyConnected
- nonlinearities such as ReLU, LeakyReLU, tanh and sigmoid
- pooling operations
- regularization operations
- BatchNorm
We also support tf.slim out of the box. A LeNet-style architecture for MNIST would look like this:
```python
logits = (LinearWrap(image)  # the starting brace is only for line-breaking
          .Conv2D('conv0', 32, 5).MaxPooling('pool0', 2)
          .Conv2D('conv1', 64, 5).MaxPooling('pool1', 2)
          .FullyConnected('fc0', 512, nl=tf.nn.relu)
          .FullyConnected('fc1', 10, nl=tf.identity)())
```
You should build your model inside a subclass of `ModelDesc`.
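A minimal sketch of what such a subclass can look like, assuming the `_get_inputs`/`_build_graph` interface (the class name `Model`, the input shapes, and the toy classifier are placeholders for illustration, not required choices):
```python
import tensorflow as tf
from tensorpack import ModelDesc, InputDesc

class Model(ModelDesc):
    def _get_inputs(self):
        # Declare the symbolic inputs that the graph expects.
        return [InputDesc(tf.float32, (None, 28, 28), 'input'),
                InputDesc(tf.int32, (None,), 'label')]

    def _build_graph(self, inputs):
        image, label = inputs
        image = tf.expand_dims(image, 3)  # add a channel dimension
        # Build the network here (e.g. the LeNet chain above); a toy
        # classifier stands in for it in this sketch:
        logits = tf.layers.dense(tf.layers.flatten(image), 10)
        self.cost = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(
                labels=label, logits=logits),
            name='cost')
```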
### Training
Built on top of TensorFlow's optimizers, the library provides several training protocols, including efficient multi-GPU setups. There is support for training on a single GPU, training on one machine with multiple GPUs (synchronous or asynchronous), training Generative Adversarial Networks, and reinforcement learning.
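As a sketch of what launching such a training could look like, assuming the interface in which a trainer is constructed from a `TrainConfig` (here, `Model` is the `ModelDesc` subclass sketched above and `dataset_train` is a placeholder for any DataFlow yielding training samples):
```python
from tensorpack import TrainConfig, SyncMultiGPUTrainer
from tensorpack.callbacks import ModelSaver

config = TrainConfig(
    model=Model(),             # a ModelDesc subclass
    dataflow=dataset_train,    # a DataFlow producing (image, label) pairs
    callbacks=[ModelSaver()],  # save checkpoints during training
    max_epoch=100,
)
# Synchronous data-parallel training; a single-GPU trainer works the same way.
SyncMultiGPUTrainer(config).train()
```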
A primitive has the same interface as a TensorFlow symbolic function: it takes a symbolic input `x` together with some parameters, and returns some symbolic outputs.
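For concreteness, such a primitive could be as simple as the following sketch (the function `scaled_relu` is made up for illustration):
```python
import tensorflow as tf

def scaled_relu(x, alpha=0.5):
    # A plain symbolic function: a symbolic input `x` plus a parameter
    # `alpha` in, a symbolic output out.
    return tf.nn.relu(x) * alpha
```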
`@layer_register()` will turn such a symbolic function into a "layer". Basically, a layer is a symbolic function with the following rules:
+ It is decorated by `@layer_register()`.
+ The first argument in its signature is its "input", so that the logging knows what to inspect. It must be a tensor or a list of tensors.
+ It returns either a tensor or a list of tensors as its "output".
By making a symbolic function a "layer", the following things will happen:
+ You will call the function with the name scope as its first argument and the input as its second, e.g. `Conv2D('conv0', x, 32, 3)`.
  Everything happening in this function will be under the variable scope 'conv0'.
+ Static shapes of the input/output will be logged to the terminal.
+ It will work with `argscope` to easily define default values for its arguments. `argscope` works for all
  arguments except the input.
+ It will work with `LinearWrap` if the output of the previous layer matches the input of the next layer.
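Putting these rules together, here is a sketch of a custom layer, assuming `layer_register` and `argscope` are importable from the top-level package (the layer itself is made up for illustration):
```python
import tensorflow as tf
from tensorpack import layer_register, argscope

@layer_register()
def ScaledReLU(x, alpha=0.5):
    # `x` is the first argument, so it is treated as the layer input.
    # Everything below runs under the variable scope given at call time.
    return tf.multiply(tf.nn.relu(x), alpha, name='output')

# Call it with the name scope first, then the input:
# y = ScaledReLU('srelu0', x, alpha=0.3)

# With argscope, defaults can be set for all arguments except the input:
# with argscope(ScaledReLU, alpha=0.1):
#     y = ScaledReLU('srelu1', x)   # uses alpha=0.1
```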
Take a look at the [Inception example](../../examples/Inception/inception-bn.py#L36) to see how a complicated model can be described with these primitives.