Commit 6cb7e9c8 authored by Yuxin Wu

fix old links to the docs (fix #1016)

parent b79a9d3b
@@ -3,7 +3,7 @@
Tensorpack is a neural network training interface based on TensorFlow.
[![Build Status](https://travis-ci.org/tensorpack/tensorpack.svg?branch=master)](https://travis-ci.org/tensorpack/tensorpack)
-[![ReadTheDoc](https://readthedocs.org/projects/tensorpack/badge/?version=latest)](http://tensorpack.readthedocs.io/en/latest/index.html)
+[![ReadTheDoc](https://readthedocs.org/projects/tensorpack/badge/?version=latest)](http://tensorpack.readthedocs.io)
[![Gitter chat](https://img.shields.io/badge/chat-on%20gitter-46bc99.svg)](https://gitter.im/tensorpack/users)
[![model-zoo](https://img.shields.io/badge/model-zoo-brightgreen.svg)](http://models.tensorpack.com)
## Features:
......
@@ -279,7 +279,7 @@ Note that you can certainly use `tf.summary.scalar(self.cost)`, but then you'll
Let's summarize: we have a model and data.
The missing piece which stitches these parts together is the training protocol.
-It is only a [configuration](http://tensorpack.readthedocs.io/en/latest/modules/tensorpack.train.html#tensorpack.train.TrainConfig)
+It is only a [configuration](http://tensorpack.readthedocs.io/modules/tensorpack.train.html#tensorpack.train.TrainConfig)
For the dataflow, we already implemented `get_data` in the first part. Specifying the learning rate is done by
......
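To make the training protocol above concrete, here is a minimal sketch of such a configuration. The argument names (`model`, `dataflow`, `callbacks`, `max_epoch`) and the `launch_train_with_config`/`SimpleTrainer` calls are assumptions based on the linked `TrainConfig` API page rather than part of this diff; `Model` and `get_data` refer to the pieces built earlier in the tutorial.

```python
# Minimal sketch of the training protocol, assuming the TrainConfig API linked above.
from tensorpack import TrainConfig, SimpleTrainer, launch_train_with_config
from tensorpack.callbacks import ModelSaver

config = TrainConfig(
    model=Model(),              # the ModelDesc defined earlier in this tutorial
    dataflow=get_data(),        # the DataFlow implemented in the first part
    callbacks=[ModelSaver()],   # e.g. periodically save checkpoints
    max_epoch=100,
)
launch_train_with_config(config, SimpleTrainer())
```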
@@ -93,13 +93,13 @@ Some choices are:
Data come from a DataFlow and get buffered on CPU by a TF queue.
3. [StagingInput](../modules/input_source.html#tensorpack.input_source.StagingInput):
Come from some other `InputSource`, then prefetched on GPU by a TF StagingArea.
-4. [TFDatasetInput](http://tensorpack.readthedocs.io/en/latest/modules/input_source.html#tensorpack.input_source.TFDatasetInput)
+4. [TFDatasetInput](../modules/input_source.html#tensorpack.input_source.TFDatasetInput)
Come from a `tf.data.Dataset`.
-5. [dataflow_to_dataset](http://tensorpack.readthedocs.io/en/latest/modules/input_source.html#tensorpack.input_source.TFDatasetInput.dataflow_to_dataset)
+5. [dataflow_to_dataset](../modules/input_source.html#tensorpack.input_source.TFDatasetInput.dataflow_to_dataset)
Come from a DataFlow, and then further processed by utilities in `tf.data.Dataset`.
6. [TensorInput](../modules/input_source.html#tensorpack.input_source.TensorInput):
Come from some tensors you define (can be reading ops, for example).
-7. [ZMQInput](http://tensorpack.readthedocs.io/en/latest/modules/input_source.html#tensorpack.input_source.ZMQInput)
+7. [ZMQInput](../modules/input_source.html#tensorpack.input_source.ZMQInput)
Come from some ZeroMQ pipe, where the reading/preprocessing may happen in a different process or even a different machine.
Typically, we recommend using `DataFlow + QueueInput` as it's good for most use cases.
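As a rough illustration of that recommended default, the snippet below wraps a DataFlow in `QueueInput` (and optionally `StagingInput`). `FakeData` stands in for a real DataFlow, and the import paths are assumptions based on the module links above.

```python
# Sketch: wrapping a DataFlow with InputSources (recommended default: QueueInput).
from tensorpack.dataflow import FakeData
from tensorpack.input_source import QueueInput, StagingInput

df = FakeData([[64, 224, 224, 3], [64]], size=1000)   # stand-in for a real DataFlow

queue_input = QueueInput(df)              # buffer data on CPU with a TF queue
staged_input = StagingInput(queue_input)  # optionally prefetch to GPU via a StagingArea
```

Either `InputSource` can then be handed to the trainer, for example through the `data=` argument of `TrainConfig`.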
......
@@ -3,7 +3,7 @@ ImageNet training code of ResNet, ShuffleNet, DoReFa-Net, AlexNet, Inception, VG
To train any of the models, just do `./{model}.py --data /path/to/ilsvrc`.
More options are available in `./{model}.py --help`.
-Expected format of data directory is described in [docs](http://tensorpack.readthedocs.io/en/latest/modules/dataflow.dataset.html#tensorpack.dataflow.dataset.ILSVRC12).
+Expected format of data directory is described in [docs](http://tensorpack.readthedocs.io/modules/dataflow.dataset.html#tensorpack.dataflow.dataset.ILSVRC12).
Some pretrained models can be downloaded at [tensorpack model zoo](http://models.tensorpack.com/).
### ShuffleNet
......
@@ -102,7 +102,7 @@ def get_imagenet_dataflow(
Returns: A DataFlow which produces BGR images and labels.
See explanations in the tutorial:
-http://tensorpack.readthedocs.io/en/latest/tutorial/efficient-dataflow.html
+http://tensorpack.readthedocs.io/tutorial/efficient-dataflow.html
"""
assert name in ['train', 'val', 'test']
isTrain = name == 'train'
......
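A hypothetical call to this helper could look like the following; the exact signature (and whether an `augmentors` argument is also required) is an assumption inferred from the docstring and the assertion above.

```python
# Hypothetical usage of get_imagenet_dataflow; argument names are assumptions.
df = get_imagenet_dataflow('/path/to/ilsvrc', 'train', batch_size=64)
df.reset_state()                 # DataFlows must be reset before iteration
for images, labels in df:        # each datapoint: a batch of BGR images + labels
    break
```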
@@ -25,7 +25,7 @@ baseline and they actually cannot beat this ResNet recipe.
| ResNet152 | 5.78% | 21.51% | [:arrow_down:](http://models.tensorpack.com/ResNet/ImageNet-ResNet152.npz) |
To reproduce the above results,
-first decompress ImageNet data into [this structure](http://tensorpack.readthedocs.io/en/latest/modules/dataflow.dataset.html#tensorpack.dataflow.dataset.ILSVRC12), then:
+first decompress ImageNet data into [this structure](http://tensorpack.readthedocs.io/modules/dataflow.dataset.html#tensorpack.dataflow.dataset.ILSVRC12), then:
```bash
./imagenet-resnet.py --data /path/to/original/ILSVRC -d 50 [--mode resnet/preact/se] --batch 256
# See ./imagenet-resnet.py -h for other options.
@@ -35,7 +35,7 @@ You should be able to see good GPU utilization (95%~99%), if your data is fast e
With batch=64x8, it can finish 100 epochs in 16 hours on AWS p3.16xlarge (8 V100s).
The default data pipeline is probably OK for machines with SSD & 20 CPU cores.
-See the [tutorial](http://tensorpack.readthedocs.io/en/latest/tutorial/efficient-dataflow.html) on other options to speed up your data.
+See the [tutorial](http://tensorpack.readthedocs.io/tutorial/efficient-dataflow.html) on other options to speed up your data.
![imagenet](imagenet-resnet.png)
......
@@ -14,7 +14,7 @@ __all__ = ['Callback', 'ProxyCallback', 'CallbackFactory']
class Callback(object):
""" Base class for all callbacks. See
`Write a Callback
-<http://tensorpack.readthedocs.io/en/latest/tutorial/extend/callback.html>`_
+<http://tensorpack.readthedocs.io/tutorial/extend/callback.html>`_
for more detailed explanation of the callback methods.
Attributes:
......
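For context on the tutorial linked in this docstring, a custom callback is usually a small subclass overriding a few of the underscore-prefixed hooks. The sketch below uses the conventional `_before_train`/`_trigger` names and the `epoch_num` attribute; treat them as assumptions to be checked against that tutorial.

```python
# Rough sketch of a user-defined callback, assuming the conventional hooks.
from tensorpack.callbacks import Callback

class PrintEpoch(Callback):
    def _before_train(self):
        print("training is about to start")

    def _trigger(self):
        # by default, _trigger runs once per epoch
        print("finished epoch", self.epoch_num)
```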
@@ -90,7 +90,7 @@ class RemoteDataZMQ(DataFlow):
Produce data from ZMQ PULL socket(s).
It is the receiver-side counterpart of :func:`send_dataflow_zmq`, which uses :mod:`tensorpack.utils.serialize`
for serialization.
-See http://tensorpack.readthedocs.io/en/latest/tutorial/efficient-dataflow.html#distributed-dataflow
+See http://tensorpack.readthedocs.io/tutorial/efficient-dataflow.html#distributed-dataflow
Attributes:
cnt1, cnt2 (int): number of data points received from addr1 and addr2
......
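The distributed-dataflow pattern referenced here pairs a sender with this receiver roughly as sketched below; the address strings and the exact export locations of `send_dataflow_zmq`/`RemoteDataZMQ` are assumptions, not verified against this version.

```python
# Sketch of the distributed dataflow pattern (sender on a worker, receiver on the trainer).
from tensorpack.dataflow import FakeData, RemoteDataZMQ, send_dataflow_zmq

# Worker process/machine: keep pushing datapoints over a ZMQ pipe.
df = FakeData([[224, 224, 3], [1]], size=1000)      # stand-in for a real DataFlow
send_dataflow_zmq(df, 'tcp://trainer-host:8877')    # blocks and streams datapoints

# Trainer process/machine: pull from the pipe and use it like any other DataFlow.
remote_df = RemoteDataZMQ('tcp://0.0.0.0:8877')
```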
@@ -193,7 +193,7 @@ class SingleCostTrainer(TowerTrainer):
Note:
`get_cost_fn` will be part of the tower function.
It must follow the `rules of tower function
-<http://tensorpack.readthedocs.io/en/latest/tutorial/trainer.html#tower-trainer>`_.
+<http://tensorpack.readthedocs.io/tutorial/trainer.html#tower-trainer>`_.
"""
get_cost_fn = TowerFuncWrapper(get_cost_fn, inputs_desc)
get_opt_fn = memoized(get_opt_fn)
......
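To illustrate the rule referenced in the note, a `get_cost_fn` used here would typically look like a tower function: it takes the input tensors described by `inputs_desc` and returns the cost tensor. The network builder below is a placeholder, not tensorpack API.

```python
# Hypothetical tower-function-style get_cost_fn: build the graph from input
# tensors and return a scalar cost. `build_network` is a placeholder.
import tensorflow as tf

def get_cost_fn(image, label):
    logits = build_network(image)   # placeholder for the model definition
    return tf.losses.sparse_softmax_cross_entropy(labels=label, logits=logits)
```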