Commit f131dfff authored by Yuxin Wu

shorten links in doc

parent 83f3d66d
......@@ -129,7 +129,7 @@ add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
add_module_names = True
# TODO use module name, but remove `tensorpack.` ?
# 'tensorpack.' prefix was removed by js
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
......@@ -380,14 +380,19 @@ def autodoc_skip_member(app, what, name, obj, skip, options):
return True
return None
def url_resolver(url):
if '.html' not in url:
return "https://github.com/ppwwyyxx/tensorpack/blob/master/" + url
else:
return "http://tensorpack.readthedocs.io/en/latest/" + url
def setup(app):
from recommonmark.transform import AutoStructify
app.connect('autodoc-process-signature', process_signature)
app.connect('autodoc-skip-member', autodoc_skip_member)
app.add_config_value(
'recommonmark_config',
{'url_resolver': lambda url: \
"https://github.com/ppwwyyxx/tensorpack/blob/master/" + url,
{'url_resolver': url_resolver,
'auto_toc_tree_section': 'Contents',
'enable_math': True,
'enable_inline_math': True,
......
......@@ -15,14 +15,6 @@ tensorpack.tfutils.collection module
:undoc-members:
:show-inheritance:
tensorpack.tfutils.distributions module
---------------------------------------
.. automodule:: tensorpack.tfutils.distributions
:members:
:undoc-members:
:show-inheritance:
tensorpack.tfutils.gradproc module
------------------------------------
......
......@@ -43,9 +43,9 @@ for simple instructions on writing a DataFlow.
1. It's easy: write everything in pure Python, and reuse existing utilities.
In contrast, writing data loaders in TF operators or other frameworks is usually painful.
2. It's fast: see [Efficient DataFlow](http://tensorpack.readthedocs.io/en/latest/tutorial/efficient-dataflow.html)
2. It's fast: see [Efficient DataFlow](efficient-dataflow.html)
on how to build a fast DataFlow with parallel prefetching.
If you're using DataFlow with tensorpack, also see [Input Pipeline tutorial](http://tensorpack.readthedocs.io/en/latest/tutorial/input-source.html)
If you're using DataFlow with tensorpack, also see [Input Pipeline tutorial](input-source.html)
on how tensorpack further accelerates data loading in the graph.
Nevertheless, tensorpack supports data loading with native TF operators as well.
......
......@@ -18,7 +18,7 @@ We will need to reach a speed of, roughly 1k ~ 2k images per second, to keep GPU
Some things to know before reading:
1. Having a fast Python generator **alone** may or may not improve your overall training speed.
You need mechanisms to hide the latency of **all** preprocessing stages, as mentioned in the
[previous tutorial](http://tensorpack.readthedocs.io/en/latest/tutorial/input-source.html).
[previous tutorial](input-source.html).
2. Reading the training set and the validation set are different.
In training it's OK to reorder, regroup, or even duplicate some datapoints, as long as the
data distribution roughly stays the same.
......
......@@ -3,7 +3,7 @@
The first thing to note: __you never have to write an augmentor__.
An augmentor is a part of the DataFlow, so you can always
[write a DataFlow](http://tensorpack.readthedocs.io/en/latest/tutorial/extend/dataflow.html)
[write a DataFlow](dataflow.html)
to perform whatever operations you need on your data, rather than writing an augmentor.
Augmentors just sometimes make things easier.
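For example, a minimal sketch of applying existing augmentors through a DataFlow wrapper (assuming `ds` is a DataFlow you already have):
```python
from tensorpack.dataflow import AugmentImageComponent, imgaug

# apply a random horizontal flip to the image component of each datapoint
ds = AugmentImageComponent(ds, [imgaug.Flip(horiz=True)])
```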
......
......@@ -83,7 +83,7 @@ Do something after each epoch has finished. Will call `self.trigger()` by defaul
Define something to do here without knowing how often it will get called.
By default it will get called by `_trigger_epoch`,
but you can customize the scheduling of this method by
[`PeriodicTrigger`](http://tensorpack.readthedocs.io/en/latest/modules/callbacks.html#tensorpack.callbacks.PeriodicTrigger),
[`PeriodicTrigger`](../../modules/callbacks.html#tensorpack.callbacks.PeriodicTrigger),
to let this method run every k steps or every k epochs.
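For example, a minimal sketch (`MyCallback` is a hypothetical callback defined elsewhere):
```python
from tensorpack.callbacks import PeriodicTrigger

# run MyCallback's trigger() every 10 steps instead of once per epoch
callbacks = [PeriodicTrigger(MyCallback(), every_k_steps=10)]
```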
### What you can do in the callback
......@@ -92,6 +92,6 @@ to let this method run every k steps or every k epochs.
To create a callable function under inference mode, use `self.trainer.get_predictor`.
* Write stuff to the monitor backend, by `self.trainer.monitors.put_xxx`.
The monitors might direct your events to TensorFlow events file, JSON file, stdout, etc.
You can get history monitor data as well. See the docs for [Monitors](http://tensorpack.readthedocs.io/en/latest/modules/callbacks.html#tensorpack.callbacks.Monitors)
* Access the current status of training, such as `epoch_num`, `global_step`. See [here](http://tensorpack.readthedocs.io/en/latest/modules/callbacks.html#tensorpack.callbacks.Callback)
You can get history monitor data as well. See the docs for [Monitors](../../modules/callbacks.html#tensorpack.callbacks.Monitors)
* Access the current status of training, such as `epoch_num`, `global_step`. See [here](../../modules/callbacks.html#tensorpack.callbacks.Callback)
* Anything else that can be done with plain python.
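A rough sketch putting these together (the tensor names `'input'`, `'prob'` and the metric value are made up):
```python
from tensorpack.callbacks import Callback

class EvalCallback(Callback):
    def _setup_graph(self):
        # a callable that runs the listed tensors under inference mode
        self.pred = self.trainer.get_predictor(['input'], ['prob'])

    def _trigger(self):
        # write a scalar to whatever monitor backends are configured
        self.trainer.monitors.put_scalar('val-accuracy', 0.99)
```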
......@@ -28,6 +28,6 @@ Optionally, you can implement the following two methods:
With a "low-level" DataFlow defined like above, you can then compose it with existing modules (e.g. batching, prefetching, ...).
DataFlow implementations for several well-known datasets are provided in the
[dataflow.dataset](http://tensorpack.readthedocs.io/en/latest/modules/dataflow.dataset.html)
[dataflow.dataset](../../modules/dataflow.dataset.html)
module; you can take them as a reference.
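For example, a rough sketch (`MyDataFlow` stands for your own class):
```python
from tensorpack.dataflow import BatchData, PrefetchDataZMQ

ds = MyDataFlow()            # the "low-level" DataFlow you wrote
ds = PrefetchDataZMQ(ds, 4)  # run it in 4 parallel processes
ds = BatchData(ds, 64)       # group datapoints into batches of 64
```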
......@@ -24,4 +24,4 @@ But you can customize it by using the base `Trainer` class.
2. Subclass `Trainer` and override the `run_step()` method. This way you can do something more than running an op.
There are several different [GAN trainers](../../examples/GAN/GAN.py) for reference.
The implementation of [SimpleTrainer](https://github.com/ppwwyyxx/tensorpack/blob/master/tensorpack/train/simple.py) may also be helpful.
The implementation of [SimpleTrainer](../../tensorpack/train/simple.py) may also be helpful.
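A very rough sketch of the second approach, assuming the trainer exposes `train_op` and `hooked_sess` the way `SimpleTrainer` does (check the linked source rather than relying on this):
```python
from tensorpack.train import Trainer

class MyTrainer(Trainer):
    def run_step(self):
        # run the usual training op, plus anything else you need every step
        self.hooked_sess.run(self.train_op)
```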
......@@ -7,7 +7,7 @@ The library tries to __support__ everything, but it could not really __include__
The interface tries to be flexible enough so you can put any XYZ on it.
You can either implement them under the interface or simply wrap some existing Python code.
See [Extend Tensorpack](http://tensorpack.readthedocs.io/en/latest/tutorial/index.html#extend-tensorpack)
See [Extend Tensorpack](index.html#extend-tensorpack)
for more details.
If you think:
......@@ -50,10 +50,10 @@ Unmatched variables on both sides will be printed as a warning.
1. You can simply use `tf.stop_gradient` in your model code in some situations (e.g. to freeze the first several layers).
2. [varreplace.freeze_variables](http://tensorpack.readthedocs.io/en/latest/modules/tfutils.html#tensorpack.tfutils.varreplace.freeze_variables) can wrap some variables with `tf.stop_gradient`.
2. [varreplace.freeze_variables](../modules/tfutils.html#tensorpack.tfutils.varreplace.freeze_variables) can wrap some variables with `tf.stop_gradient`.
3. [ScaleGradient](http://tensorpack.readthedocs.io/en/latest/modules/tfutils.html#tensorpack.tfutils.gradproc.ScaleGradient) can be used to set the gradients of some variables to 0.
3. [ScaleGradient](../modules/tfutils.html#tensorpack.tfutils.gradproc.ScaleGradient) can be used to set the gradients of some variables to 0.
Note that the above methods only prevent variables from being updated by SGD.
Some variables may be updated by other means,
e.g., BatchNorm statistics are updated through the `UPDATE_OPS` collection and the [RunUpdateOps](http://tensorpack.readthedocs.io/en/latest/modules/callbacks.html#tensorpack.callbacks.RunUpdateOps) callback.
e.g., BatchNorm statistics are updated through the `UPDATE_OPS` collection and the [RunUpdateOps](../modules/callbacks.html#tensorpack.callbacks.RunUpdateOps) callback.
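For example, a hedged sketch of option 3 (the variable regex is made up; double-check the exact API against the linked docs):
```python
import tensorflow as tf
from tensorpack.tfutils import optimizer
from tensorpack.tfutils.gradproc import ScaleGradient

# zero the gradients of every variable matching 'conv0/.*'
opt = optimizer.apply_grad_processors(
    tf.train.AdamOptimizer(1e-3),
    [ScaleGradient(('conv0/.*', 0.))])
```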
......@@ -52,5 +52,5 @@ if you need to create and variables.
When you need to deal with a complicated graph, it may be easier to build the graph manually.
You are free to do so as long as you tell the trainer what to do in each step.
Check out [Write a Trainer](http://tensorpack.readthedocs.io/en/latest/tutorial/extend/trainer.html)
Check out [Write a Trainer](extend/trainer.html)
for using a custom graph with trainer.
......@@ -49,7 +49,7 @@ The benefits of using TensorFlow ops are:
above figure is hidden, it makes no difference at all.
For most types of problems, up to the scale of multi-GPU ImageNet training,
Python can offer enough speed if you use a fast library (e.g. `tensorpack.dataflow`).
See the [Efficient DataFlow](http://tensorpack.readthedocs.io/en/latest/tutorial/efficient-dataflow.html) tutorial
See the [Efficient DataFlow](efficient-dataflow.html) tutorial
on how to build a fast Python reader with DataFlow.
* No "Copy to TF" (i.e. `feed_dict`) stage.
......@@ -78,7 +78,7 @@ For example,
When you set `TrainConfig(dataflow=)`, tensorpack trainers automatically add proper prefetching for you.
You can also use the `TrainConfig(data=)` option to use a customized `InputSource`.
In case you want to use TF ops rather than a DataFlow, you can use `TensorInput` as the `InputSource`
(See the [PTB example](https://github.com/ppwwyyxx/tensorpack/tree/master/examples/PennTreebank)).
(See the [PTB example](../../tensorpack/tree/master/examples/PennTreebank)).
## Figure out the Bottleneck
......
......@@ -3,12 +3,12 @@
While you can use other symbolic libraries,
tensorpack also contains a small collection of common model primitives,
such as conv/deconv, fc, batch normalization, pooling layers, and some custom loss functions.
such as conv/deconv, fc, bn, pooling layers.
Using the tensorpack implementations, you can also benefit from `argscope` and `LinearWrap` to
simplify the code.
Note that the layers were written because there are no other alternatives back at that time.
In the future we may shift to `tf.layers` because they will be better maintained.
Note that these layers were written because there were no other alternatives back at that time.
In the future we may shift the implementation to `tf.layers` because they will be better maintained.
### argscope and LinearWrap
`argscope` gives you a context with default arguments.
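For instance, a minimal sketch (`image` is a placeholder for your input tensor; argument names follow the old-style `kernel_shape`/`nl` convention used elsewhere in these docs):
```python
import tensorflow as tf
from tensorpack import argscope, Conv2D

# every Conv2D inside the scope defaults to 32 channels, 3x3 kernel, ReLU
with argscope(Conv2D, out_channel=32, kernel_shape=3, nl=tf.nn.relu):
    l = Conv2D('conv0', image)
    l = Conv2D('conv1', l)
```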
......@@ -44,7 +44,7 @@ l = FullyConnected('fc1', l, 10, nl=tf.identity)
### Use Models outside Tensorpack
You can use tensorpack models alone as a simple symbolic function library.
To do this, just enter a [TowerContext](http://tensorpack.readthedocs.io/en/latest/modules/tfutils.html#tensorpack.tfutils.TowerContext)
To do this, just enter a [TowerContext](../modules/tfutils.html#tensorpack.tfutils.TowerContext)
when you define your model:
```python
with TowerContext('', is_training=True):
......
......@@ -16,7 +16,7 @@ Users or derived trainers should implement __what the iteration is__.
2. Trainer assumes the existence of __"epoch"__, i.e. that the iterations run in double for-loops.
But it doesn't need to be a full pass over your dataset; ``steps_per_epoch`` can be any number you set
and it only affects the [schedule of callbacks](http://tensorpack.readthedocs.io/en/latest/tutorial/extend/callback.html).
and it only affects the [schedule of callbacks](extend/callback.html).
In other words, an "epoch" is the __default period__ to run callbacks (validation, summary, checkpoint, etc.).
......@@ -39,13 +39,13 @@ config = TrainConfig(
# start training:
SomeTrainer(config, other_arguments).train()
# start multi-GPU training with a synchronous update:
# start multi-GPU training with synchronous update:
# SyncMultiGPUTrainerParameterServer(config).train()
```
When you set the DataFlow (rather than the InputSource) in the config,
tensorpack trainers automatically pick up certain prefetch mechanism,
which will run faster than a naive `sess.run(..., feed_dict={...})`.
tensorpack trainers automatically adopt a certain prefetch mechanism, as mentioned
in the [Input Pipeline](input-source.html) tutorial.
You can set the InputSource instead, to customize this behavior.
Existing multi-GPU trainers include the logic of data-parallel training.
......@@ -59,5 +59,5 @@ would be ``(batch size of InputSource/DataFlow) * #GPU``.
### Custom Trainers
You can easily write a trainer for other types of training.
See [Write a Trainer](http://tensorpack.readthedocs.io/en/latest/tutorial/extend/trainer.html).
See [Write a Trainer](extend/trainer.html).
......@@ -13,8 +13,9 @@ from .common import get_op_tensor_name
from .varmanip import (SessionUpdate, get_savename_from_varname,
is_training_name, get_checkpoint_path)
__all__ = ['SessionInit', 'SaverRestore', 'SaverRestoreRelaxed',
'ParamRestore', 'DictRestore', 'ChainInit',
__all__ = ['SessionInit', 'ChainInit',
'SaverRestore', 'SaverRestoreRelaxed',
'ParamRestore', 'DictRestore',
'JustCurrentSession', 'get_model_loader', 'TryResumeTraining']
......@@ -276,7 +277,7 @@ def get_model_loader(filename):
def TryResumeTraining():
"""
Load latest checkpoint from LOG_DIR, if there is one.
Try loading latest checkpoint from ``logger.LOG_DIR``, only if there is one.
Returns:
SessInit: either a :class:`JustCurrentSession`, or a :class:`SaverRestore`.
......
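A hedged usage sketch (`df` and `model` are placeholders for your DataFlow and ModelDesc):
```python
from tensorpack import TrainConfig, TryResumeTraining

config = TrainConfig(
    dataflow=df,
    model=model,
    # resume from the latest checkpoint in logger.LOG_DIR, if one exists
    session_init=TryResumeTraining(),
)
```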