Commit 3036e824 authored by Yuxin Wu

update docs

parent 88796373
@@ -68,7 +68,7 @@ Dependencies:
+ Python 2.7 or 3.3+. Python 2.7 is supported until [it retires in 2020](https://pythonclock.org/).
+ Python bindings for OpenCV. (Optional, but required by a lot of features)
- + TensorFlow ≥ 1.3, < 2. (Optional, if you only want to use `tensorpack.dataflow` alone as a data processing library)
+ + TensorFlow ≥ 1.3, < 2. (Not required if you only want to use `tensorpack.dataflow` alone as a data processing library)
```
pip install --upgrade git+https://github.com/tensorpack/tensorpack.git
# or add `--user` to install to user's local directories
......
@@ -40,15 +40,15 @@ for inference demo after training.
It has functionalities to build the graph, load the checkpoint, and
return a callable for simple prediction. Refer to its docs for details.
- OfflinePredictor is only for quick demo purposes.
- It runs inference on numpy arrays, therefore may not be the most efficient way.
- It also has very limited functionalities.
To use it, you need to provide your model, checkpoint, and define the
input & output tensors to infer with. You can obtain tensor names by
`print()`, or assign a name to a tensor with `tf.identity(x, name=...)`.
A simple example of how it works:
```python
pred_config = PredictConfig(
-   session_init=get_model_loader(model_path),
    model=YourModel(),
+   session_init=get_model_loader(model_path),
    input_names=['input1', 'input2'],     # tensor names in the graph, or names of the declared inputs
    output_names=['output1', 'output2'])  # tensor names in the graph
predictor = OfflinePredictor(pred_config)
@@ -60,6 +60,10 @@ e.g., use NHWC format, support encoded image format, etc.
You can make these changes inside the `model` or `tower_func` in your `PredictConfig`.
The example in [examples/basics/export-model.py](../examples/basics/export-model.py) demonstrates such an altered inference graph.
+ OfflinePredictor is only for quick demo purposes.
+ It runs inference on numpy arrays, therefore may not be the most efficient way.
+ It also has very limited functionalities.
### Exporter
In addition to the standard checkpoint format tensorpack saved for you during training,
......
@@ -37,12 +37,11 @@ def Conv2D(
activity_regularizer=None,
split=1):
"""
- A wrapper around `tf.layers.Conv2D`.
- Some differences to maintain backward-compatibility:
+ Similar to `tf.layers.Conv2D`, but with some differences:
1. Default kernel initializer is variance_scaling_initializer(2.0).
2. Default padding is 'same'.
- 3. Support 'split' argument to do group conv. Note that this is not efficient.
+ 3. Support 'split' argument to do group convolution.
Variable Names:
......
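The `split` argument mentioned above performs group convolution: the input channels are split into `split` independent groups, each convolved with its own slice of output channels. A minimal NumPy sketch of the idea, assuming NHWC layout, stride 1 and 'valid' padding (this is not tensorpack's implementation; `conv2d_valid` and `group_conv2d` are hypothetical helper names):

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive 'valid' 2D convolution. x: (H, W, Cin), k: (kh, kw, Cin, Cout)."""
    H, W, _ = x.shape
    kh, kw, _, Cout = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1, Cout))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # contract the (kh, kw, Cin) patch against the first 3 kernel axes
            out[i, j] = np.tensordot(x[i:i + kh, j:j + kw, :], k, axes=3)
    return out

def group_conv2d(x, k, split):
    """Group convolution: channels are processed in `split` independent groups.

    x: (H, W, Cin); k: (kh, kw, Cin // split, Cout), Cout divisible by split.
    Each output-channel group only sees Cin // split input channels, which is
    why the kernel's input-channel dimension shrinks by a factor of `split`.
    """
    xs = np.split(x, split, axis=-1)   # input-channel groups
    ks = np.split(k, split, axis=-1)   # output-channel groups
    return np.concatenate(
        [conv2d_valid(xi, ki) for xi, ki in zip(xs, ks)], axis=-1)

x = np.arange(5 * 5 * 4, dtype=float).reshape(5, 5, 4)
k = np.ones((3, 3, 2, 6))             # Cin // split == 2 for split == 2
y = group_conv2d(x, k, split=2)       # shape (3, 3, 6)
```

With `split=1` this reduces to an ordinary convolution, which is why the argument is backward-compatible with the default behavior.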
@@ -4,6 +4,7 @@
import tensorflow as tf
+ from ..utils.develop import log_deprecated
from ..compat import tfv1
from .batch_norm import BatchNorm
from .common import VariableHolder, layer_register
@@ -36,7 +37,7 @@ def Maxout(x, num_unit):
@layer_register()
- def PReLU(x, init=0.001, name='output'):
+ def PReLU(x, init=0.001, name=None):
"""
Parameterized ReLU as in the paper `Delving Deep into Rectifiers: Surpassing
Human-Level Performance on ImageNet Classification
@@ -45,16 +46,18 @@ def PReLU(x, init=0.001, name='output'):
Args:
x (tf.Tensor): input
init (float): initial value for the learnable slope.
- name (str): name of the output.
+ name (str): deprecated argument. Don't use.
Variable Names:
* ``alpha``: learnable slope.
"""
+ if name is not None:
+     log_deprecated("PReLU(name=...) is deprecated! The output tensor will be named `output`.")
init = tfv1.constant_initializer(init)
alpha = tfv1.get_variable('alpha', [], initializer=init)
x = ((1 + alpha) * x + (1 - alpha) * tf.abs(x))
- ret = tf.multiply(x, 0.5, name=name)
+ ret = tf.multiply(x, 0.5, name=name or None)
ret.variables = VariableHolder(alpha=alpha)
return ret
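The expression `0.5 * ((1 + alpha) * x + (1 - alpha) * |x|)` in the code above is an algebraic rewrite of the usual piecewise PReLU definition (`x` for `x >= 0`, `alpha * x` otherwise): for non-negative `x` the two terms sum to `2x`, and for negative `x` they sum to `2*alpha*x`. A quick NumPy check of the identity, with the learnable slope fixed to a constant for illustration:

```python
import numpy as np

alpha = 0.25                          # stands in for the learnable `alpha` variable
x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])

# the formulation used in the layer: 0.5 * ((1 + alpha) * x + (1 - alpha) * |x|)
y = 0.5 * ((1 + alpha) * x + (1 - alpha) * np.abs(x))

# the usual piecewise definition: x where x >= 0, alpha * x elsewhere
y_ref = np.where(x >= 0, x, alpha * x)
assert np.allclose(y, y_ref)
```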
@@ -64,7 +67,14 @@ def PReLU(x, init=0.001, name='output'):
def BNReLU(x, name=None):
"""
A shorthand of BatchNormalization + ReLU.
+ Args:
+     x (tf.Tensor): the input
+     name: deprecated, don't use.
"""
+ if name is not None:
+     log_deprecated("BNReLU(name=...) is deprecated! The output tensor will be named `output`.")
x = BatchNorm('bn', x)
x = tf.nn.relu(x, name=name)
return x
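Numerically, the shorthand above standardizes each channel and then clips negatives to zero. A rough NumPy sketch of the inference-time computation (`bn_relu` is a hypothetical helper; the real BatchNorm layer additionally has learnable scale/shift and running statistics, omitted here):

```python
import numpy as np

def bn_relu(x, eps=1e-5):
    """Per-channel standardization followed by ReLU. x: (H, W, C)."""
    mean = x.mean(axis=(0, 1), keepdims=True)
    var = x.var(axis=(0, 1), keepdims=True)
    xn = (x - mean) / np.sqrt(var + eps)   # normalize each channel
    return np.maximum(xn, 0.0)             # ReLU clips negatives to zero

feat = np.arange(2 * 2 * 3, dtype=float).reshape(2, 2, 3)
out = bn_relu(feat)                        # same shape, all values >= 0
```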