Commit a759dbb0 authored by Yuxin Wu

update docs

parent a9b89567
@@ -22,7 +22,7 @@ Feature Requests:
+ You can implement a lot of features by extending tensorpack
  (See http://tensorpack.readthedocs.io/en/latest/tutorial/index.html#extend-tensorpack).
  It does not have to be added to tensorpack unless you have a good reason.
-+ We don't take example requests.
++ We don't take feature requests for examples.
Usage Questions:

@@ -6,7 +6,8 @@
There are two ways to do inference during training.
1. The easiest way is to write a callback, and use
-   `self.trainer.get_predictor()` to get a callable under inference mode.
+   [self.trainer.get_predictor()](../modules/modules/train.html#tensorpack.train.TowerTrainer.get_predictor)
+   to get a callable under inference mode.
   See [Write a Callback](extend/callback.html).
2. If your inference follows the paradigm of:
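
A minimal sketch of option 1 above: a callback that obtains a predictor with
`self.trainer.get_predictor()` and runs it on numpy data. The callback name, the tensor
names `'input'` and `'prob'`, and the dummy batch are assumptions for illustration,
not part of this commit.

```python
import numpy as np
from tensorpack import Callback

class InferenceAfterEpoch(Callback):
    """Hypothetical callback: run the model in inference mode after every epoch."""

    def _setup_graph(self):
        # 'input' and 'prob' are assumed tensor names; use the ones from your own model.
        self.predictor = self.trainer.get_predictor(['input'], ['prob'])

    def _trigger(self):
        batch = np.zeros((1, 28, 28, 3), dtype='float32')   # dummy data for illustration
        prob, = self.predictor(batch)                        # numpy in, numpy out
        self.trainer.monitors.put_scalar('sample/prob0', float(prob.ravel()[0]))
```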

@@ -29,15 +30,16 @@ Please note that, the metagraph saved during training is the training graph.
But sometimes you need a different one for inference.
For example, you may need a different data layout for CPU inference,
or you may need placeholders in the inference graph, or the training graph contains multi-GPU replication
-which you want to remove.
+which you want to remove. In fact, directly importing a huge training metagraph is usually not a good idea for deployment.
In this case, you can always construct a new graph by simply:
```python
a, b = tf.placeholder(...), tf.placeholder(...)
# call symbolic functions on a, b
```
-The only tool tensorpack has for after-training inference is `OfflinePredictor`,
+The only tool tensorpack has for after-training inference is [OfflinePredictor](../modules/predict.html#tensorpack.predict.OfflinePredictor),
a simple function to build the graph and return a callable for you.
It is mainly for quick demo purposes.
-It only runs inference on Python data, therefore may not be the most efficient way.
-Check out some examples for its usage.
+It only runs inference on numpy arrays, therefore may not be the most efficient way.
+Check out examples and docs for its usage.
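
A hedged usage sketch of the `OfflinePredictor` mentioned above, built from a `PredictConfig`.
`MyModel`, the checkpoint path, the tensor names and the input shape are placeholders, not
taken from this commit; substitute the ones from your own training setup.

```python
import numpy as np
from tensorpack import OfflinePredictor, PredictConfig, SaverRestore

pred_config = PredictConfig(
    model=MyModel(),                                     # the ModelDesc used for training (hypothetical)
    session_init=SaverRestore('train_log/checkpoint'),   # load trained variables (hypothetical path)
    input_names=['input'],                               # placeholders to feed
    output_names=['prob'])                               # any computable tensors to fetch
predictor = OfflinePredictor(pred_config)

# The returned callable maps numpy arrays (one per input name)
# to numpy arrays (one per output name).
batch = np.zeros((1, 224, 224, 3), dtype='float32')      # dummy data for illustration
prob, = predictor(batch)
```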

@@ -4,6 +4,7 @@
# Author: Yuxin Wu <ppwwyyxxc@gmail.com>
from tensorpack import *
+from tensorpack.tfutils import get_tf_version_number
from tensorpack.tfutils.summary import add_moving_summary
from tensorpack.tfutils.scope_utils import auto_reuse_variable_scope
import tensorflow as tf

@@ -83,6 +84,7 @@ class Model(DCGAN.Model):
if __name__ == '__main__':
+    assert get_tf_version_number() >= 1.4
    args = DCGAN.get_args(default_batch=64, default_z_dim=128)
    M = Model(shape=args.final_size, batch=args.batch, z_dim=args.z_dim)
    if args.sample:

@@ -40,7 +40,7 @@ class PredictConfig(object):
tensors can be any computable tensor in the graph.
return_input (bool): same as in :attr:`PredictorBase.return_input`.
create_graph (bool): create a new graph, or use the default graph
-when then predictor is first initialized.
+when the predictor is first initialized.
You need to set either `model`, or `inputs_desc` plus `tower_func`.
"""
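
The docstring above notes that you must set either `model`, or `inputs_desc` plus `tower_func`.
A hedged sketch of the second route, under assumptions not taken from this commit: the network
function, tensor names, shapes, and checkpoint path are made up, and the exact type accepted for
`tower_func` may differ between tensorpack versions, so check the signature in your installation.

```python
import tensorflow as tf
from tensorpack import InputDesc, OfflinePredictor, PredictConfig, SaverRestore

def tower_fn(image):
    # Hypothetical inference-only graph: build symbolic functions on the input
    # tensor and give the output a stable name so it can be fetched later.
    logits = tf.layers.dense(tf.layers.flatten(image), 10)
    tf.nn.softmax(logits, name='prob')

pred_config = PredictConfig(
    inputs_desc=[InputDesc(tf.float32, [None, 28, 28, 1], 'input')],  # describes the placeholders
    tower_func=tower_fn,                                  # builds the graph from those inputs
    session_init=SaverRestore('train_log/checkpoint'),    # hypothetical checkpoint path
    input_names=['input'],
    output_names=['prob'])
predictor = OfflinePredictor(pred_config)
```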