Commit bedec8cd authored by Yuxin Wu

update docs formatting

parent f55d81f2
...@@ -29,71 +29,71 @@ You can overwrite any of the following methods to define a new callback:
* `_setup_graph(self)`
  Create any ops / tensors in the graph that you might need to use in the callback.
  This method exists to separate "define" from "run", and to
  avoid the common mistake of creating ops inside
  loops. All changes to the graph should be made in this method.
  To access ops that are already defined,
  you can use TF methods such as
  [`graph.get_tensor_by_name`](https://www.tensorflow.org/api_docs/python/tf/Graph#get_tensor_by_name).
  If you're using a `TowerTrainer` instance, more tools are available:
  - Use `self.trainer.tower_func.towers` to access the
    [tower handles](../modules/tfutils.html#tensorpack.tfutils.tower.TowerTensorHandles),
    and therefore the tensors in each tower.
  - [self.get_tensors_maybe_in_tower()](../modules/callbacks.html#tensorpack.callbacks.Callback.get_tensors_maybe_in_tower)
    is a helper function to access tensors in the first training tower.
  - [self.trainer.get_predictor()](../modules/train.html#tensorpack.train.TowerTrainer.get_predictor)
    is a helper function to create a callable under inference mode.
* `_before_train(self)`
  Can be used to run some manual initialization of variables, or start some services for the training.
* `_after_train(self)`
  Usually some finalization work.
* `_before_epoch(self)`, `_after_epoch(self)`
  Use them __only__ when you really need something to happen __immediately__ before/after an epoch.
  Otherwise, `_trigger_epoch` should be enough.
* `_before_run(self, ctx)`, `_after_run(self, ctx, values)`
  These are the equivalent of [tf.train.SessionRunHook](https://www.tensorflow.org/api_docs/python/tf/train/SessionRunHook).
  Please refer to the TensorFlow documentation for the detailed API.
  They are used to run extra ops / eval extra tensors / feed extra values __along with__ the actual training iterations.
  Note the difference between running __along with__ an iteration and running after an iteration.
  When you write
  ```python
  def _before_run(self, _):
      return tf.train.SessionRunArgs(fetches=my_op)
  ```
  the training loop would become `sess.run([training_op, my_op])`.
  This is different from `sess.run(training_op); sess.run(my_op);`,
  which is what you would get if you ran the op in `_trigger_step`.
* `_trigger_step(self)`
  Do something (e.g. run ops, print stuff) after each step has finished.
  Be careful to do only light work here, because it could affect training speed.
* `_trigger_epoch(self)`
  Do something after each epoch has finished. Will call `self.trigger()` by default.
* `_trigger(self)`
  Define something to do here without knowing how often it will get called.
  By default it will get called by `_trigger_epoch`,
  but you can customize its scheduling with
  [`PeriodicTrigger`](../../modules/callbacks.html#tensorpack.callbacks.PeriodicTrigger)
  to let this method run every k steps or every k epochs.
  A minimal sketch that combines several of these methods is shown after this list.
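Putting the pieces together, here is a minimal sketch of a custom callback (not part of the tensorpack docs). The tensor name `my_loss:0` and the scalar name `my_metric` are made-up placeholders; substitute names that actually exist in your own graph.

```python
import tensorflow as tf
from tensorpack import Callback, PeriodicTrigger


class MyMetricCallback(Callback):
    """Sketch: fetch a tensor along with every step, and log it periodically."""

    def _setup_graph(self):
        # "define" phase: look up the (hypothetical) tensor once, not inside the training loop.
        # With a TowerTrainer, this also searches the first training tower.
        self._loss = self.get_tensors_maybe_in_tower(['my_loss:0'])[0]
        self._last_value = None

    def _before_run(self, ctx):
        # fetch the tensor __along with__ this training iteration
        return tf.train.SessionRunArgs(fetches=self._loss)

    def _after_run(self, ctx, values):
        # values.results holds whatever _before_run asked to fetch
        self._last_value = values.results

    def _trigger(self):
        # called by _trigger_epoch by default, or as scheduled by PeriodicTrigger below
        if self._last_value is not None:
            self.trainer.monitors.put_scalar('my_metric', self._last_value)


# let _trigger run every 100 steps instead of once per epoch
callback = PeriodicTrigger(MyMetricCallback(), every_k_steps=100)
```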
### What you can do in the callback
...
...@@ -6,8 +6,8 @@
There are two ways to do inference during training.
1. The easiest way is to write a callback, and use
   `self.trainer.get_predictor()` to get a callable under inference mode.
   See [Write a Callback](extend/callback.html). A brief sketch is shown after this list.
2. If your inference follows the paradigm of:
   "fetch some tensors for each input, and aggregate the results".
...@@ -21,10 +21,10 @@ There are two ways to do inference during training.
Tensorpack doesn't care what happens after training.
It saves models to the standard checkpoint format, plus a metagraph protobuf file.
They are sufficient to use with whatever deployment methods TensorFlow supports.
But you'll need to read the TF docs and do it on your own.

The only thing tensorpack has is `OfflinePredictor`,
a simple function to build the graph and return a callable for you.
It is mainly for quick demo purposes.
It only runs inference on Python data, therefore it may not be the most efficient way.
Check out some examples for its usage.
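For reference, here is a hedged sketch of `OfflinePredictor` usage. `MyModel`, the checkpoint path, the tensor names, and the input shape are placeholders; use whatever your own training code defined.

```python
import numpy as np
from tensorpack import OfflinePredictor, PredictConfig
from tensorpack.tfutils.sessinit import get_model_loader

# MyModel is your own ModelDesc subclass, defined elsewhere (placeholder here)
pred = OfflinePredictor(PredictConfig(
    model=MyModel(),
    session_init=get_model_loader('/path/to/checkpoint'),
    input_names=['input'],        # names of the model's input tensors
    output_names=['output']))     # names of the tensors you want to fetch

# feed plain numpy data; the predictor returns a list with one array per output name
outputs = pred(np.zeros((1, 224, 224, 3), dtype='float32'))
```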