Symbolic functions should be nothing new to you, and writing a simple symbolic function is nothing special in tensorpack.
But you can turn a symbolic function into a "layer" by following some very simple rules, and then gain benefits from the framework.
Take a look at the [Convolutional Layer](../tensorpack/models/conv2d.py#L14) implementation for an example of how to define a layer:
```python
@layer_register()
def Conv2D(x, out_channel, kernel_shape,
           padding='SAME', stride=1,
           W_init=None, b_init=None,
           nl=tf.nn.relu, split=1, use_bias=True):
```
Basically, a layer is a symbolic function with the following rules:
+ It is decorated by `@layer_register`.
+ The first argument is its "input". It must be a tensor or a list of tensors.
+ It returns either a tensor or a list of tensors as its "output".
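As a minimal sketch of these rules, here is a hypothetical layer (the `Scale` name and its behavior are invented for illustration, and it assumes `layer_register` can be imported from `tensorpack`):

```python
import tensorflow as tf
from tensorpack import layer_register  # assumed import path

@layer_register()
def Scale(x, factor=2.0):
    # Rule 1: it is decorated by @layer_register.
    # Rule 2: the first argument `x` is the input tensor.
    # Rule 3: it returns a tensor as its output.
    return tf.multiply(x, factor, name='output')
```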
By making a symbolic function a "layer", the following things will happen:
+ You will call the function with a scope argument, e.g. `Conv2D('conv0', x, 32, 3)`.
Everything happening in this function will be under the variable scope 'conv0'. You can register
the layer with `use_scope=False` to disable this feature.
+ Static shapes of input/output will be logged.
+ `argscope` will then work for all its arguments except the first one (the input).
+ It will work with `LinearWrap`: you can use it to chain layers when the output of one layer is the input of the next (see the sketch after this list).
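For instance, a sketch of both calling styles (assuming `image` is an existing input tensor; the layer arguments follow the `Conv2D` signature shown above):

```python
from tensorpack import Conv2D, MaxPooling, LinearWrap

# Plain call: the first argument is the scope name, the second is the input.
# Variables created inside will live under the variable scope 'conv0'.
y = Conv2D('conv0', image, 32, 3)

# With LinearWrap the input argument is implicit: each layer consumes the
# previous layer's output. The trailing () extracts the final tensor.
y = (LinearWrap(image)
     .Conv2D('conv1', 32, 3)
     .MaxPooling('pool1', 2)
     .Conv2D('conv2', 64, 3)())
```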
Take a look at the [Inception example](../examples/Inception/inception-bn.py#L36) to see how a complicated model can be described with these primitives.
There are also a number of (non-layer) symbolic functions in the `tfutils.symbolic_functions` module.
There isn't a hard rule about which symbolic functions should be made layers -- they're quite
similar anyway. But in general I define the following kinds of symbolic functions as layers:
+ Functions which contain variables. A variable scope is almost always helpful for such functions.
+ Functions which are commonly referred to as "layers", such as pooling. This makes a model definition more straightforward.