To run distributed training, set `TRAINER=horovod` and refer to the [HorovodTrainer docs](http://tensorpack.readthedocs.io/modules/train.html#tensorpack.train.HorovodTrainer) for details.
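On a single machine with 8 GPUs, for example, a Horovod launch might look like the sketch below; the `mpirun` invocation and the `./train.py` entry point are illustrative assumptions, while `TRAINER=horovod` is the documented option.

```bash
# Illustrative sketch: launch 8 Horovod worker processes with MPI.
# The script name and any other --config keys should match the ones
# documented for this example; TRAINER=horovod is the option set above.
mpirun -np 8 --bind-to none \
    ./train.py --config TRAINER=horovod
```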
All options can be changed either on the command line or in the `config.py` file (recommended).
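For example, options can be overridden at launch time in `KEY=VALUE` form. This is a sketch: the `--config` flag and the keys shown are assumptions based on the example's `config.py`, and editing `config.py` directly achieves the same effect.

```bash
# Sketch: override a few options on the command line instead of editing config.py.
# The keys below are illustrative; see config.py for the authoritative list.
./train.py --config \
    MODE_MASK=True MODE_FPN=True \
    DATA.BASEDIR=/path/to/COCO/DIR \
    BACKBONE.WEIGHTS=/path/to/ImageNet-R50-AlignPadding.npz
```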
Some reasonable configurations are listed in the table below.
...

To evaluate the performance of a model on COCO:
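As a rough sketch, assuming the example provides a `predict.py` script with `--evaluate`, `--load`, and `--config` flags (these may differ in your checkout), evaluation would look something like:

```bash
# Sketch only: evaluate a trained checkpoint on COCO and write detection
# results to output.json. Pass the same --config options used for training.
./predict.py --evaluate output.json \
    --load /path/to/Trained-Model-Checkpoint \
    --config SAME-AS-TRAINING
```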
Several trained models can be downloaded in the table below. Evaluation and
prediction have to be run with the corresponding configs used in training.
| [DoReFa-Net: training binary / low-bitwidth CNN on ImageNet](DoReFa-Net) | reproduce 4 papers |
| [Adversarial training with state-of-the-art robustness](https://github.com/facebookresearch/ImageNet-Adversarial-Training) | official code for the paper |
To reproduce training or evaluation in the above table,
first decompress ImageNet data into [this structure](http://tensorpack.readthedocs.io/modules/dataflow.dataset.html#tensorpack.dataflow.dataset.ILSVRC12), then:
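The exact command depends on the model being reproduced; as an illustrative sketch only (the `imagenet-resnet.py` script name and the `-d`/`--batch` flags are assumptions and may differ from the command documented for each model):

```bash
# Hypothetical example: train a ResNet-50 baseline after ImageNet has been
# decompressed into the directory structure linked above.
./imagenet-resnet.py --data /path/to/ILSVRC12 -d 50 --batch 256
```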