Commit 09942e47 authored by Yuxin Wu

fix link in docs

parent 108c9557
......@@ -373,6 +373,7 @@ def autodoc_skip_member(app, what, name, obj, skip, options):
'StagingInputWrapper',
'StepTensorPrinter',
'set_tower_func',
+ 'TryResumeTraining',
'guided_relu', 'saliency_map', 'get_scalar_var',
'prediction_incorrect', 'huber_loss',
......
......@@ -40,11 +40,11 @@ You can overwrite any of the following methods to define a new callback:
If you're using a `TowerTrainer` instance, more tools are available:
- Use `self.trainer.tower_func.towers` to access the
- [tower handles](../modules/tfutils.html#tensorpack.tfutils.tower.TowerTensorHandles),
+ [tower handles](../../modules/tfutils.html#tensorpack.tfutils.tower.TowerTensorHandles),
and therefore the tensors in each tower.
- - [self.get_tensors_maybe_in_tower()](../modules/callbacks.html#tensorpack.callbacks.Callback.get_tensors_maybe_in_tower)
+ - [self.get_tensors_maybe_in_tower()](../../modules/callbacks.html#tensorpack.callbacks.Callback.get_tensors_maybe_in_tower)
is a helper function to access tensors in the first training tower.
- - [self.trainer.get_predictor()](../modules/train.html#tensorpack.train.TowerTrainer.get_predictor)
+ - [self.trainer.get_predictor()](../../modules/train.html#tensorpack.train.TowerTrainer.get_predictor)
is a helper function to create a callable under inference mode.
* `_before_train(self)`
......
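As an aside (not part of this commit), a minimal sketch of a callback that uses the TowerTrainer tools described in the hunk above; the tensor and layer names are hypothetical:

```python
from tensorpack.callbacks import Callback

class InspectTowers(Callback):
    def _setup_graph(self):
        # Fetch tensors from the first training tower by name (names invented).
        self._acts = self.get_tensors_maybe_in_tower(['conv1/output'])
        # Build a callable that runs the listed tensors under inference mode.
        self._pred = self.trainer.get_predictor(['input'], ['logits'])

    def _before_train(self):
        # Handles to every tower built by the trainer's tower function.
        towers = self.trainer.tower_func.towers
        print('Training with {} towers'.format(len(towers)))
```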
......@@ -2,7 +2,7 @@
### Write a DataFlow
There are several existing DataFlow, e.g. [ImageFromFile](../../modules/dataflow.html#tensorpack.dataflow.ImageFromFile),
- [DataFromList](../../http://tensorpack.readthedocs.io/en/latest/modules/dataflow.html#tensorpack.dataflow.DataFromList),
+ [DataFromList](../../modules/dataflow.html#tensorpack.dataflow.DataFromList),
which you can use if your data format is simple.
In general, you probably need to write a source DataFlow to produce data for your task,
and then compose it with existing modules (e.g. mapping, batching, prefetching, ...).
......
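For illustration (again, not from the commit), a hedged sketch of such a source DataFlow composed with an existing module; the csv file is hypothetical, and newer tensorpack versions spell `get_data`/`size` as `__iter__`/`__len__`:

```python
import numpy as np
from tensorpack.dataflow import DataFlow, BatchData

class CsvRows(DataFlow):
    """Yield one datapoint per row of a (hypothetical) csv file."""
    def __init__(self, path):
        self._path = path

    def get_data(self):
        with open(self._path) as f:
            for line in f:
                row = np.asarray([float(x) for x in line.split(',')], dtype='float32')
                yield [row]          # a datapoint is a list of components

    def size(self):
        with open(self._path) as f:
            return sum(1 for _ in f)

# Compose with existing modules, e.g. batching (prefetching would wrap this).
df = BatchData(CsvRows('data.csv'), batch_size=32, remainder=True)
```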
......@@ -20,11 +20,11 @@ Then it is a good time to open an issue.
1. Learn `tf.Print`.
- 2. Know [DumpTensors](http://tensorpack.readthedocs.io/en/latest/modules/callbacks.html#tensorpack.callbacks.DumpTensors),
-    [ProcessTensors](http://tensorpack.readthedocs.io/en/latest/modules/callbacks.html#tensorpack.callbacks.ProcessTensors) callbacks.
+ 2. Know [DumpTensors](../modules/callbacks.html#tensorpack.callbacks.DumpTensors),
+    [ProcessTensors](../modules/callbacks.html#tensorpack.callbacks.ProcessTensors) callbacks.
And it's also easy to write your own version of them.
- 3. The [ProgressBar](http://tensorpack.readthedocs.io/en/latest/modules/callbacks.html#tensorpack.callbacks.ProgressBar)
+ 3. The [ProgressBar](../modules/callbacks.html#tensorpack.callbacks.ProgressBar)
callback can print some scalar statistics, though not enabled by default.
## How to freeze some variables in training
......
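A hedged sketch of wiring up those callbacks; the tensor names are invented and the exact dump behavior may differ across versions:

```python
from tensorpack.callbacks import DumpTensors, ProgressBar

callbacks = [
    DumpTensors(['conv1/output:0']),   # dump these (hypothetical) tensors
    ProgressBar(['tower0/cost:0']),    # extra scalars in the bar, off by default
]
```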
......@@ -20,11 +20,15 @@ It can dump the model to a `var-name: value` dict saved in npy/npz format.
Model loading (in either training or testing) is through the `session_init` interface.
Currently there are two ways a session can be restored:
- `session_init=SaverRestore(...)` which restores a
- TF checkpoint, or `session_init=DictRestore(...)` which restores a dict.
- (`get_model_loader` is a small helper to decide which one to use from a file name.)
+ [session_init=SaverRestore(...)](../modules/tfutils.html#tensorpack.tfutils.sessinit.SaverRestore)
+ which restores a TF checkpoint,
+ or [session_init=DictRestore(...)](../modules/tfutils.html#tensorpack.tfutils.sessinit.DictRestore) which restores a dict
+ ([get_model_loader](../modules/tfutils.html#tensorpack.tfutils.sessinit.get_model_loader)
+ is a small helper to decide which one to use from a file name).
+ To load multiple models, use [ChainInit](../modules/tfutils.html#tensorpack.tfutils.sessinit.ChainInit).
- Variable restoring is completely based on name match between
+ Variable restoring is completely based on __name match__ between
variables in the current graph and variables in the `session_init` initializer.
Variables that appear on only one side will be printed as a warning.
......
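A usage sketch of the interfaces linked above (the file names are hypothetical):

```python
import numpy as np
from tensorpack.tfutils.sessinit import (
    SaverRestore, DictRestore, ChainInit, get_model_loader)

init = SaverRestore('train_log/run1/model-10000')   # restore a TF checkpoint
init = DictRestore(dict(np.load('weights.npz')))    # restore a var-name: value dict
init = get_model_loader('weights.npz')              # picks one based on file name
# Load multiple models by chaining initializers:
init = ChainInit([SaverRestore('train_log/run1/model-10000'),
                  DictRestore(dict(np.load('extra.npz')))])
```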
......@@ -34,7 +34,7 @@ The function needs to follow some conventions:
### MultiGPU Trainers
- For data-parallel multi-GPU training, different [multi-GPU trainers](http://tensorpack.readthedocs.io/en/latest/modules/train.html)
+ For data-parallel multi-GPU training, different [multi-GPU trainers](../modules/train.html)
implement different parallel logic.
They take care of device placement, gradient averaging and synchronization
in an efficient way, and all reach the same performance as the
......
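As a hedged sketch of selecting one of those trainers (the trainer choice and GPU count are illustrative, and the TrainConfig is assumed to be built elsewhere):

```python
from tensorpack import SyncMultiGPUTrainerReplicated, launch_train_with_config

def train(config):
    """Run `config` (a TrainConfig built elsewhere) data-parallel on 2 GPUs."""
    launch_train_with_config(config, SyncMultiGPUTrainerReplicated(2))
```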
......@@ -264,6 +264,7 @@ def get_model_loader(filename):
def TryResumeTraining():
"""
Try loading latest checkpoint from ``logger.get_logger_dir()``, only if there is one.
+ Actually not very useful... better to write your own one.
Returns:
SessInit: either a :class:`JustCurrentSession`, or a :class:`SaverRestore`.
......
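In the spirit of the docstring's advice, a hedged sketch of a hand-rolled replacement that mirrors the documented behavior:

```python
import tensorflow as tf
from tensorpack.utils import logger
from tensorpack.tfutils.sessinit import SaverRestore, JustCurrentSession

def try_resume():
    """Return a SaverRestore for the latest checkpoint in the log dir, if any."""
    d = logger.get_logger_dir()
    ckpt = tf.train.latest_checkpoint(d) if d else None
    return SaverRestore(ckpt) if ckpt else JustCurrentSession()
```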