Commit b5adbb62 authored by Yuxin Wu

update readme

parent 634369d8
@@ -5,10 +5,11 @@ A neural net training interface based on TensorFlow.
 [![ReadTheDoc](https://readthedocs.org/projects/tensorpack/badge/?version=latest)](http://tensorpack.readthedocs.io/en/latest/index.html)
 [![Gitter chat](https://badges.gitter.im/gitterHQ/gitter.png)](https://gitter.im/tensorpack/users)
-See some [examples](examples) to learn about the framework:
+See some [examples](examples) to learn about the framework. Everything runs on multiple GPUs, because why not?
 ### Vision:
-+ [Multi-GPU training of ResNet on ImageNet](examples/ResNet)
++ [Train ResNet/SE-ResNet on ImageNet](examples/ResNet)
++ [Train Faster-RCNN on COCO object detection](examples/FasterRCNN)
 + [Generative Adversarial Network(GAN) variants](examples/GAN), including DCGAN, InfoGAN, Conditional GAN, WGAN, BEGAN, DiscoGAN, Image to Image, CycleGAN.
 + [DoReFa-Net: train binary / low-bitwidth CNN on ImageNet](examples/DoReFa-Net)
 + [Fully-convolutional Network for Holistically-Nested Edge Detection(HED)](examples/HED)
...
@@ -252,6 +252,7 @@ if __name__ == '__main__':
     if args.evaluate is not None:
         assert args.evaluate.endswith('.json')
         assert args.load
+        os.environ['TF_CUDNN_USE_AUTOTUNE'] = '0'
         offline_evaluate(args.load, args.evaluate)
         sys.exit()
     if args.predict is not None:
...
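A note on the one code change in this commit: TF_CUDNN_USE_AUTOTUNE is a TensorFlow environment variable, and setting it to '0' disables cuDNN convolution autotuning. With autotune enabled, cuDNN re-benchmarks convolution algorithms for each new input shape, so disabling it is a common speedup when inference images vary in size, as they typically do during detection evaluation. Below is a minimal sketch of the pattern, assuming the surrounding script structure from the hunk above; offline_evaluate is reduced to a stub and the file paths are placeholders, not part of the commit.

```python
import os
import sys


def offline_evaluate(model_path, output_json):
    """Stub standing in for the script's evaluation entry point:
    restore weights from model_path, run inference on the validation
    set, and write results to output_json."""
    print('would evaluate {} and write {}'.format(model_path, output_json))


if __name__ == '__main__':
    # Disable cuDNN autotuning before running the graph: with autotune on,
    # cuDNN benchmarks convolution algorithms again for every new input
    # shape, which adds overhead when evaluation images come in many sizes.
    os.environ['TF_CUDNN_USE_AUTOTUNE'] = '0'
    offline_evaluate('/path/to/checkpoint', 'predictions.json')
    sys.exit()
```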
@@ -10,17 +10,18 @@ Training examples with __reproducible__ and meaningful performance.
 ## Vision:
 + [A tiny SVHN ConvNet with 97.8% accuracy](svhn-digit-convnet.py)
-+ Multi-GPU training of [ResNet](ResNet) and [ShuffleNet](ShuffleNet) on ImageNet
++ Train [ResNet](ResNet) and [ShuffleNet](ShuffleNet) on ImageNet
++ [Train ResNet50-Faster-RCNN on COCO](FasterRCNN)
 + [DoReFa-Net: training binary / low-bitwidth CNN on ImageNet](DoReFa-Net)
-+ [Generative Adversarial Network(GAN) variants](GAN), including DCGAN, InfoGAN, Conditional GAN, WGAN, BEGAN, DiscoGAN, Image to Image, CycleGAN.
++ [Generative Adversarial Network(GAN) variants](GAN), including DCGAN, InfoGAN, Conditional GAN, WGAN, BEGAN, DiscoGAN, Image to Image, CycleGAN
 + [Inception-BN and InceptionV3](Inception)
 + [Fully-convolutional Network for Holistically-Nested Edge Detection(HED)](HED)
 + [Spatial Transformer Networks on MNIST addition](SpatialTransformer)
 + [Visualize CNN saliency maps](Saliency)
 + [Similarity learning on MNIST](SimilarityLearning)
 + Learn steering filters with [Dynamic Filter Networks](DynamicFilterNetwork)
-+ Load a pre-trained [AlexNet](load-alexnet.py) or [VGG16](load-vgg16.py) model.
-+ Load a pre-trained [Convolutional Pose Machines](ConvolutionalPoseMachines/).
++ Load a pre-trained [AlexNet](load-alexnet.py) or [VGG16](load-vgg16.py) model
++ Load a pre-trained [Convolutional Pose Machines](ConvolutionalPoseMachines/)
 ## Reinforcement Learning:
 + [Deep Q-Network(DQN) variants on Atari games](DeepQNetwork), including DQN, DoubleDQN, DuelingDQN.
...