Commit 2c429763 authored by Yuxin Wu

update examples readme

parent cc0f07a9
Training examples with __reproducible__ and meaningful performance.
## Getting Started:
+ [An illustrative mnist example with explanation of the framework](mnist-convnet.py)
+ The same mnist example using [tf-slim](mnist-tfslim.py), [Keras layers](mnist-keras.py), [Higher-level Keras](mnist-keras-v2.py) and [with weights visualizations](mnist-visualizations.py)
+ [A tiny SVHN ConvNet with 97.8% accuracy](svhn-digit-convnet.py)
+ [A boilerplate file to start with, for your own tasks](boilerplate.py)
## Vision:

| Name | Performance |
| --- | --- |
| Train [ResNet](ResNet) and [ShuffleNet](ShuffleNet) on ImageNet | reproduce paper |
| [Train ResNet50-Faster-RCNN on COCO](FasterRCNN) | reproduce paper |
| [DoReFa-Net: training binary / low-bitwidth CNN on ImageNet](DoReFa-Net) | reproduce paper |
| [Generative Adversarial Network(GAN) variants](GAN), including DCGAN, InfoGAN, <br/> Conditional GAN, WGAN, BEGAN, DiscoGAN, Image to Image, CycleGAN | visually reproduce |
| [Inception-BN and InceptionV3](Inception) | reproduce reference code |
| [Fully-convolutional Network for Holistically-Nested Edge Detection(HED)](HED) | visually reproduce |
| [Spatial Transformer Networks on MNIST addition](SpatialTransformer) | reproduce paper |
| [Visualize CNN saliency maps](Saliency) | visually reproduce |
| [Similarity learning on MNIST](SimilarityLearning) | |
| Learn steering filters with [Dynamic Filter Networks](DynamicFilterNetwork) | visually reproduce |
| Load a pre-trained [AlexNet](load-alexnet.py) or [VGG16](load-vgg16.py) model | |
| Load a pre-trained [Convolutional Pose Machines](ConvolutionalPoseMachines/) | |
## Reinforcement Learning:

| Name | Performance |
| --- | --- |
| [Deep Q-Network(DQN) variants on Atari games](DeepQNetwork), including DQN, DoubleDQN, DuelingDQN. | reproduce paper |
| [Asynchronous Advantage Actor-Critic(A3C) with demos on OpenAI Gym](A3C-Gym) | reproduce paper |
## Speech / NLP:

| Name | Performance |
| --- | --- |
| [LSTM-CTC for speech recognition](CTC-TIMIT) | reproduce paper |
| [char-rnn for fun](Char-RNN) | fun |
| [LSTM language model on PennTreebank](PennTreebank) | reproduce reference code |
#### Note to contributors:
An example needs to satisfy at least one of the following:
+ Reproduce the performance of a published or well-known paper.
+ Get state-of-the-art performance on some task.
+ Illustrate a new way of using the library that is currently not covered.
__Performance is important__. Deep learning code is usually easy to write
but hard to verify -- thanks to SGD, training will often still converge even when you've made mistakes.
Without a setting and performance comparable to someone else's, you cannot tell whether an implementation is correct.
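To see why convergence alone is weak evidence of correctness, here is a toy illustration (not from this repository, and independent of tensorpack): SGD on a linear regression model whose gradient contains a deliberate bug (a wrong scaling constant) still drives the loss down steadily.

```python
import numpy as np

# Toy illustration: fit y = X @ w with gradient descent, but compute the
# gradient with a deliberate bug (scaled by an arbitrary 0.3). The loss
# still decreases, so "the loss went down" does not prove the code is right.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=200)

w = np.zeros(3)
lr = 0.05
losses = []
for _ in range(300):
    grad = X.T @ (X @ w - y) / len(y)  # correct gradient of mean squared error
    grad *= 0.3                        # deliberate "bug": wrong gradient scale
    w -= lr * grad
    losses.append(float(np.mean((X @ w - y) ** 2)))

print(losses[0], "->", losses[-1])  # loss shrinks despite the bug
```

Only a comparison against a known-good reference number would reveal whether such a bug (or a subtler one) is present.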