Training examples with __reproducible performance__.
__The word "reproduce" should always mean reproducing performance__.
Reproducing a method is usually easy, but reproducing its __performance__ is what really matters, and is rarely seen on GitHub.
With the magic of SGD, wrong code often still appears to work, so you won't know whether you've made mistakes unless you check the performance numbers.
See [Unawareness of Deep Learning Mistakes](https://medium.com/@ppwwyyxx/unawareness-of-deep-learning-mistakes-d5b5774da0ba).
## Getting Started:
These examples don't have meaningful performance numbers; they are meant to be demos.
+ [An illustrative MNIST example with an explanation of the framework](basics/mnist-convnet.py)
+ A tiny [Cifar ConvNet](basics/cifar-convnet.py) and [SVHN ConvNet](basics/svhn-digit-convnet.py)
+ Tensorpack supports any symbolic library. See the same MNIST example written with [tf.layers](basics/mnist-tflayers.py), [tf-slim](basics/mnist-tfslim.py), and [with weights visualizations](basics/mnist-visualizations.py); a minimal sketch is shown after this list.
+ If you've used Keras, check out [Keras examples](keras)
+ [A boilerplate file to start with, for your own tasks](boilerplate.py)
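
As a rough illustration of what these examples look like, below is a minimal sketch of an MNIST model written with `tf.layers` inside tensorpack. It is not copied from the linked examples: it assumes a recent tensorpack where `ModelDesc` exposes `inputs()`, `build_graph()` and `optimizer()`, and that `tf.TensorSpec` is available; consult the linked examples for the authoritative versions.

```python
# A minimal sketch (not the official example): an MNIST model written with
# plain tf.layers inside tensorpack's ModelDesc interface.
import tensorflow as tf
from tensorpack import ModelDesc, TrainConfig, SimpleTrainer, launch_train_with_config
from tensorpack.dataflow import dataset, BatchData


class Model(ModelDesc):
    def inputs(self):
        # Declare the input tensors the graph expects.
        return [tf.TensorSpec((None, 28, 28), tf.float32, 'input'),
                tf.TensorSpec((None,), tf.int32, 'label')]

    def build_graph(self, image, label):
        # Any symbolic library can build the graph; tf.layers is used here.
        l = tf.layers.flatten(image)
        l = tf.layers.dense(l, 512, activation=tf.nn.relu)
        logits = tf.layers.dense(l, 10)
        # Return the cost tensor that the trainer should minimize.
        return tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(labels=label, logits=logits),
            name='cross_entropy_loss')

    def optimizer(self):
        return tf.train.AdamOptimizer(1e-3)


if __name__ == '__main__':
    # A dataflow yielding (image, label) batches from the MNIST training set.
    df = BatchData(dataset.Mnist('train'), 128)
    config = TrainConfig(model=Model(), dataflow=df, max_epoch=10)
    launch_train_with_config(config, SimpleTrainer())
```

The real examples follow the same structure (input declaration, graph construction, optimizer) while adding callbacks for validation and logging.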
## Vision:
| Name | Performance |
| --- | --- |
| Train [ResNet](ResNet), [ShuffleNet and other models](ImageNetModels) on ImageNet | reproduce paper |
| [Train Faster-RCNN / Mask-RCNN on COCO](FasterRCNN) | reproduce paper |
| [DoReFa-Net: training binary / low-bitwidth CNN on ImageNet](DoReFa-Net) | reproduce paper |