Commit defa5c61 authored by Yuxin Wu

fix links in docs (fix #1179)

parent c04e0e11
@@ -34,7 +34,11 @@ See [tutorials and documentations](http://tensorpack.readthedocs.io/tutorial/ind
 ## Examples:
-We refuse toy examples. We refuse low-quality implementations.
+We refuse toy examples.
+Instead of showing tiny CNNs trained on MNIST/Cifar10,
+we provide training scripts that reproduce well-known papers.
+We refuse low-quality implementations.
 Unlike most open source repos which only __implement__ papers,
 [Tensorpack examples](examples) faithfully __reproduce__ papers,
 demonstrating its __flexibility__ for actual research.
...
@@ -51,7 +51,7 @@ People often think they should use `tf.data` because it's fast.
 above figure is hidden, __faster reader brings no gains to overall throughput__.
 For most types of problems, up to the scale of multi-GPU ImageNet training,
 Python can offer enough speed if you use a fast library (e.g. `tensorpack.dataflow`).
-See the [Efficient DataFlow](efficient-dataflow.html) tutorial on how to build a fast Python reader with DataFlow.
+See the [Efficient DataFlow](/tutorial/efficient-dataflow.html) tutorial on how to build a fast Python reader with DataFlow.
 ### TensorFlow Reader: Cons
 The disadvantage of TF reader is obvious and it's huge: it's __too complicated__.
...
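The hunk above claims that a pure-Python reader is fast enough up to multi-GPU ImageNet scale. As a concrete illustration, here is a minimal sketch of the DataFlow pattern it refers to, assuming the `tensorpack.dataflow` API as documented around the time of this commit (the dataset choice and the batch/process counts are illustrative, and `PrefetchDataZMQ` was renamed `MultiProcessRunnerZMQ` in later releases):

```python
from tensorpack.dataflow import dataset, BatchData, PrefetchDataZMQ

# Build a pure-Python reader: each stage wraps the previous DataFlow.
df = dataset.Mnist('train')           # yields [image, label] datapoints
df = BatchData(df, 128)               # group 128 datapoints into one batch
df = PrefetchDataZMQ(df, nr_proc=4)   # read with 4 parallel processes over ZMQ

# A DataFlow is plain Python: iterate it directly to benchmark the reader.
df.reset_state()                      # must be called once before iteration
for images, labels in df:
    pass                              # feed into training, or just time the loop
```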
@@ -10,7 +10,7 @@ Tensorpack follows the "define-and-run" paradigm. Therefore a training script ha
 The goal of this step is to define "what to run" in later training steps,
 and it can happen __either inside or outside__ the tensorpack trainer.
-2. __Run__: Train the model (the [Trainer.train() method](../modules/train.html#tensorpack.train.Trainer.train)):
+2. __Run__: Train the model (the [Trainer.train() method](/modules/train.html#tensorpack.train.Trainer.train)):
    1. Setup callbacks/monitors.
    2. Finalize graph, initialize session.
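The two-step structure this hunk describes can be seen end to end in a few lines. A sketch using tensorpack's documented high-level entry points (`Model` is a hypothetical `ModelDesc` subclass, sketched further below; `df` is a DataFlow as in the example above):

```python
from tensorpack import TrainConfig, SimpleTrainer, launch_train_with_config

# Step 1 (build): the graph is defined when the trainer sets it up from
# the config -- nothing is executed yet ("define-and-run").
config = TrainConfig(model=Model(), dataflow=df)

# Step 2 (run): set up callbacks/monitors, finalize the graph, create the
# session, then enter the epoch/step loops. This helper performs both steps:
launch_train_with_config(config, SimpleTrainer())
```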
@@ -38,7 +38,7 @@ Users or derived trainers should implement __what the iteration is__.
 2. Trainer assumes the existence of __"epoch"__, i.e. that the iterations run in double for-loops.
    But `steps_per_epoch` can be any number you set
-   and it only affects the [schedule of callbacks](extend/callback.html).
+   and it only affects the [schedule of callbacks](callback.html).
 In other words, an "epoch" in tensorpack is the __default period to run callbacks__ (validation, summary, checkpoint, etc.).
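Since an "epoch" is only the callback period, `steps_per_epoch` is a free knob. A hedged sketch with `TrainConfig` (the numbers are deliberately arbitrary; `ModelSaver` is a standard tensorpack callback):

```python
from tensorpack import TrainConfig
from tensorpack.callbacks import ModelSaver

config = TrainConfig(
    model=Model(),              # hypothetical ModelDesc subclass, as above
    dataflow=df,
    callbacks=[ModelSaver()],   # checkpoint once per "epoch"
    steps_per_epoch=500,        # any number: it sets the callback period,
    max_epoch=100,              #   not a full pass over the dataset
)
```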
@@ -53,8 +53,8 @@ These trainers will take care of step 1 (define the graph), with the following a
 3. A function which takes input tensors and returns the cost.
 4. A function which returns an optimizer.
-These are documented in [SingleCostTrainer.setup_graph](../modules/train.html#tensorpack.train.SingleCostTrainer.setup_graph).
-In practice you'll not use this method directly, but use the [high-level interface](../tutorial/training-interface.html#with-modeldesc-and-trainconfig) instead.
+These are documented in [SingleCostTrainer.setup_graph](/modules/train.html#tensorpack.train.SingleCostTrainer.setup_graph).
+In practice you'll not use this method directly, but use the [high-level interface](/tutorial/training-interface.html#with-modeldesc-and-trainconfig) instead.
 ### Write a Trainer
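The argument list above maps directly onto the high-level `ModelDesc` interface. A minimal sketch assuming the TF1-era API current around this commit (older tensorpack versions return `tf.placeholder`s from `inputs()` rather than `tf.TensorSpec`s; the tiny network is purely illustrative):

```python
import tensorflow as tf
from tensorpack import ModelDesc

class Model(ModelDesc):
    def inputs(self):
        # declares the input signatures; the actual input source is
        # supplied separately (e.g. TrainConfig's dataflow)
        return [tf.TensorSpec([None, 28, 28], tf.float32, 'image'),
                tf.TensorSpec([None], tf.int64, 'label')]

    def build_graph(self, image, label):
        # item 3 above: takes input tensors, returns the cost
        logits = tf.layers.dense(tf.layers.flatten(image), 10)
        return tf.losses.sparse_softmax_cross_entropy(labels=label, logits=logits)

    def optimizer(self):
        # item 4 above: returns an optimizer
        return tf.train.AdamOptimizer(1e-3)
```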
@@ -74,13 +74,13 @@ You will need to do two things for a new Trainer:
 2. Define what the iteration is. There are two ways to define it:
    1. Set `Trainer.train_op` to a TensorFlow operation. This op will be run by default.
    2. Subclass `Trainer` and override the `run_step()` method. This way you can
       do something more than running an op.
       Note that the trainer has `self.sess` and `self.hooked_sess`: only the hooked
       session will trigger the `before_run`/`after_run` callbacks.
       If you need more than one `Session.run` in one step, special care needs
       to be taken to choose which session to use, because many states
       (global steps, StagingArea, summaries) are maintained through `before_run`/`after_run`.
 There are several different [GAN trainers](../../examples/GAN/GAN.py) for reference.
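A compact sketch of the two options enumerated above, assuming the `Trainer` behavior this hunk describes (this is not taken from the GAN trainers the link points to):

```python
from tensorpack.train import Trainer

class MyTrainer(Trainer):
    def __init__(self, train_op):
        super(MyTrainer, self).__init__()
        # Option 1: set train_op; the default run_step() simply runs it.
        self.train_op = train_op

    # Option 2: override run_step() to do more than run a single op.
    def run_step(self):
        # use hooked_sess so before_run/after_run callbacks still fire
        # (global step, StagingArea, summaries are maintained through them)
        self.hooked_sess.run(self.train_op)
```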
@@ -9,7 +9,11 @@ Github is full of deep learning code that "implements" but does not "reproduce"
 methods, and you'll not know whether the implementation is actually correct.
 See [Unawareness of Deep Learning Mistakes](https://medium.com/@ppwwyyxx/unawareness-of-deep-learning-mistakes-d5b5774da0ba).
-We refuse toy examples. We refuse low-quality implementations.
+We refuse toy examples.
+Instead of showing tiny CNNs trained on MNIST/Cifar10,
+we provide training scripts that reproduce well-known papers.
+We refuse low-quality implementations.
 Unlike most open source repos which only __implement__ methods,
 [Tensorpack examples](examples) faithfully __reproduce__
 experiments and performance in the paper,
...