Commit defa5c61 authored by Yuxin Wu

fix links in docs (fix #1179)

parent c04e0e11
@@ -34,7 +34,11 @@ See [tutorials and documentations](http://tensorpack.readthedocs.io/tutorial/index.html)
## Examples:
-We refuse toy examples. We refuse low-quality implementations.
+We refuse toy examples.
+Instead of showing tiny CNNs trained on MNIST/Cifar10,
+we provide training scripts that reproduce well-known papers.
+We refuse low-quality implementations.
Unlike most open source repos which only __implement__ papers,
[Tensorpack examples](examples) faithfully __reproduce__ papers,
demonstrating its __flexibility__ for actual research.
@@ -51,7 +51,7 @@ People often think they should use `tf.data` because it's fast.
above figure is hidden, __faster reader brings no gains to overall throughput__.
For most types of problems, up to the scale of multi-GPU ImageNet training,
Python can offer enough speed if you use a fast library (e.g. `tensorpack.dataflow`).
-See the [Efficient DataFlow](efficient-dataflow.html) tutorial on how to build a fast Python reader with DataFlow.
+See the [Efficient DataFlow](/tutorial/efficient-dataflow.html) tutorial on how to build a fast Python reader with DataFlow.
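For reference, a minimal sketch of such a Python reader, assuming tensorpack's documented `DataFlow` API (`RandomImages` is a hypothetical stand-in for a real reader, and `PrefetchDataZMQ` was renamed in later versions):

```python
import numpy as np
from tensorpack.dataflow import DataFlow, BatchData, PrefetchDataZMQ

class RandomImages(DataFlow):
    """Hypothetical stand-in for a real reader: yields [image, label] datapoints."""
    def __iter__(self):
        while True:
            yield [np.random.rand(224, 224, 3).astype('float32'),
                   np.random.randint(1000)]

df = RandomImages()
df = BatchData(df, 64)       # group datapoints into batches of 64
df = PrefetchDataZMQ(df, 4)  # parallelize the reader over 4 processes
                             # (renamed MultiProcessRunnerZMQ in later versions)
df.reset_state()             # required once, in the process that iterates
for dp in df:                # dp is [batched images, batched labels]
    break
```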
### TensorFlow Reader: Cons
The disadvantage of TF reader is obvious and it's huge: it's __too complicated__.
@@ -10,7 +10,7 @@ Tensorpack follows the "define-and-run" paradigm. Therefore a training script has two steps:
The goal of this step is to define "what to run" in later training steps,
and it can happen __either inside or outside__ tensorpack trainer.
-2. __Run__: Train the model (the [Trainer.train() method](../modules/train.html#tensorpack.train.Trainer.train)):
+2. __Run__: Train the model (the [Trainer.train() method](/modules/train.html#tensorpack.train.Trainer.train)):
1. Setup callbacks/monitors.
2. Finalize graph, initialize session.
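In script form, the two steps reduce to a sketch like the following, where `MyModel` and `my_dataflow` are hypothetical placeholders for a real `ModelDesc` and `DataFlow`:

```python
from tensorpack import TrainConfig, SimpleTrainer, launch_train_with_config

# Step 1 "Build": the config carries the model whose graph will be defined.
config = TrainConfig(model=MyModel(), dataflow=my_dataflow)  # hypothetical names

# Step 2 "Run": define the graph, then enter Trainer.train() --
# callbacks/monitors are set up, the session is initialized, and the loop runs.
launch_train_with_config(config, SimpleTrainer())
```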
@@ -38,7 +38,7 @@ Users or derived trainers should implement __what the iteration is__.
2. Trainer assumes the existence of __"epoch"__, i.e. that the iterations run in double for-loops.
But `steps_per_epoch` can be any number you set
-and it only affects the [schedule of callbacks](extend/callback.html).
+and it only affects the [schedule of callbacks](callback.html).
In other words, an "epoch" in tensorpack is the __default period to run callbacks__ (validation, summary, checkpoint, etc.).
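A hedged sketch of how that period is chosen (arbitrary numbers; `MyModel` and `my_dataflow` are hypothetical placeholders):

```python
from tensorpack import TrainConfig
from tensorpack.callbacks import ModelSaver

config = TrainConfig(
    model=MyModel(),           # hypothetical ModelDesc
    dataflow=my_dataflow,      # hypothetical DataFlow
    steps_per_epoch=500,       # any number you set
    max_epoch=20,              # 500 * 20 iterations in total
    callbacks=[ModelSaver()],  # checkpoints once per "epoch", i.e. every 500 steps
)
```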
@@ -53,8 +53,8 @@ These trainers will take care of step 1 (define the graph), with the following arguments:
3. A function which takes input tensors and returns the cost.
4. A function which returns an optimizer.
-These are documented in [SingleCostTrainer.setup_graph](../modules/train.html#tensorpack.train.SingleCostTrainer.setup_graph).
-In practice you'll not use this method directly, but use [high-level interface](../tutorial/training-interface.html#with-modeldesc-and-trainconfig) instead.
+These are documented in [SingleCostTrainer.setup_graph](/modules/train.html#tensorpack.train.SingleCostTrainer.setup_graph).
+In practice you'll not use this method directly, but use [high-level interface](/tutorial/training-interface.html#with-modeldesc-and-trainconfig) instead.
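A self-contained sketch of that high-level interface, assuming tensorpack's documented `ModelDesc` API (a toy linear regression with TF1-style ops; older versions return placeholders instead of `tf.TensorSpec` from `inputs()`):

```python
import numpy as np
import tensorflow as tf
from tensorpack import (ModelDesc, TrainConfig, SimpleTrainer,
                        launch_train_with_config)
from tensorpack.dataflow import BatchData, DataFromList

class LinearModel(ModelDesc):
    def inputs(self):
        # the input signature the trainer needs
        return [tf.TensorSpec([None, 1], tf.float32, 'x'),
                tf.TensorSpec([None, 1], tf.float32, 'y')]

    def build_graph(self, x, y):
        # item 3: takes input tensors, returns the cost
        pred = tf.layers.dense(x, 1)
        return tf.reduce_mean(tf.squared_difference(pred, y), name='cost')

    def optimizer(self):
        # item 4: returns an optimizer
        return tf.train.GradientDescentOptimizer(1e-3)

data = [[np.array([v], 'float32'), np.array([2. * v], 'float32')]
        for v in np.linspace(0., 1., 100)]
df = BatchData(DataFromList(data), 8)

launch_train_with_config(
    TrainConfig(model=LinearModel(), dataflow=df,
                steps_per_epoch=10, max_epoch=2),
    SimpleTrainer())
```

The methods map onto the arguments listed above: `build_graph()` is the function that takes input tensors and returns the cost (item 3), `optimizer()` returns the optimizer (item 4), and `inputs()` supplies the input signature.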
### Write a Trainer
@@ -9,7 +9,11 @@ Github is full of deep learning code that "implements" but does not "reproduce"
methods, and you'll not know whether the implementation is actually correct.
See [Unawareness of Deep Learning Mistakes](https://medium.com/@ppwwyyxx/unawareness-of-deep-learning-mistakes-d5b5774da0ba).
-We refuse toy examples. We refuse low-quality implementations.
+We refuse toy examples.
+Instead of showing tiny CNNs trained on MNIST/Cifar10,
+we provide training scripts that reproduce well-known papers.
+We refuse low-quality implementations.
Unlike most open source repos which only __implement__ methods,
[Tensorpack examples](examples) faithfully __reproduce__
experiments and performance in the paper,