Commit acae3fe5 authored by Yuxin Wu

add missing files

parent 40711389
@@ -32,12 +32,12 @@ matrix:
       env: TF_VERSION=1.6.0 TF_TYPE=release PYPI=true
     - os: linux
       python: 2.7
-      env: TF_VERSION=1.head TF_TYPE=nightly
+      env: TF_TYPE=nightly
     - os: linux
       python: 3.5
-      env: TF_VERSION=1.head TF_TYPE=nightly
+      env: TF_TYPE=nightly
   allow_failures:
-    - env: TF_VERSION=1.head TF_TYPE=nightly
+    - env: TF_TYPE=nightly
 install:
   - pip install -U pip  # the pip version on travis is too old
@@ -37,7 +37,7 @@ Instead of showing you 10 random networks with random accuracy,
 And everything runs on multiple GPUs. Some highlights:
 ### Vision:
-+ [Train ResNet on ImageNet](examples/ResNet)
++ [Train ResNet](examples/ResNet) and [other models](examples/ImageNetModels) on ImageNet.
 + [Train Faster-RCNN / Mask-RCNN on COCO object detection](examples/FasterRCNN)
 + [Generative Adversarial Network (GAN) variants](examples/GAN), including DCGAN, InfoGAN, Conditional GAN, WGAN, BEGAN, DiscoGAN, Image to Image, CycleGAN.
 + [DoReFa-Net: train binary / low-bitwidth CNN on ImageNet](examples/DoReFa-Net)
@@ -4,23 +4,22 @@
 Training examples with __reproducible performance__.
 __The word "reproduce" should always mean reproducing performance__.
-Reproducing a method is usually easy, but you don't know whether you've made mistakes, because wrong code will often appear to work.
-Reproducing __performance__ results is what really matters, and is something that's hardly seen on github.
+With the magic of SGD, wrong code often appears to still work, unless you check its performance number.
 See [Unawareness of Deep Learning Mistakes](https://medium.com/@ppwwyyxx/unawareness-of-deep-learning-mistakes-d5b5774da0ba).
 ## Getting Started:
 These examples don't have meaningful performance numbers. They are supposed to be just demos.
 + [An illustrative MNIST example with explanation of the framework](basics/mnist-convnet.py)
-+ A tiny [Cifar ConvNet](basics/cifar-convnet.py) and [SVHN ConvNet](basics/svhn-digit-convnet.py)
++ Tensorpack supports any symbolic libraries. See the same MNIST example written with [tf.layers](basics/mnist-tflayers.py), [tf-slim](basics/mnist-tfslim.py), and [with weights visualizations](basics/mnist-visualizations.py)
++ A tiny [Cifar ConvNet](basics/cifar-convnet.py) and [SVHN ConvNet](basics/svhn-digit-convnet.py)
 + If you've used Keras, check out [Keras examples](keras)
 + [A boilerplate file to start with, for your own tasks](boilerplate.py)
 ## Vision:
 | Name | Performance |
 | --- | --- |
-| Train [ResNet](ResNet) and [ShuffleNet](ImageNetModels) on ImageNet | reproduce paper |
+| Train [ResNet](ResNet), [ShuffleNet and other models](ImageNetModels) on ImageNet | reproduce paper |
 | [Train Faster-RCNN / Mask-RCNN on COCO](FasterRCNN) | reproduce paper |
 | [DoReFa-Net: training binary / low-bitwidth CNN on ImageNet](DoReFa-Net) | reproduce paper |
 | [Generative Adversarial Network (GAN) variants](GAN), including DCGAN, InfoGAN, <br/> Conditional GAN, WGAN, BEGAN, DiscoGAN, Image to Image, CycleGAN | visually reproduce |
@@ -15,16 +15,7 @@ if [ $TF_TYPE == "release" ]; then
     fi
 fi
 if [ $TF_TYPE == "nightly" ]; then
-    if [[ $TRAVIS_PYTHON_VERSION == 2* ]]; then
-        TF_BINARY_URL=https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON2,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-${TF_VERSION}-cp27-none-linux_x86_64.whl
-    fi
-    if [[ $TRAVIS_PYTHON_VERSION == 3.4* ]]; then
-        TF_BINARY_URL=https://ci.tensorflow.org/view/Nightly/job/nightly-matrix-cpu/TF_BUILD_IS_OPT=OPT,TF_BUILD_IS_PIP=PIP,TF_BUILD_PYTHON_VERSION=PYTHON3,label=cpu-slave/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-${TF_VERSION}-cp34-cp34m-linux_x86_64.whl
-    fi
-    if [[ $TRAVIS_PYTHON_VERSION == 3.5* ]]; then
-        TF_BINARY_URL=https://ci.tensorflow.org/view/Nightly/job/nightly-python35-linux-cpu/lastSuccessfulBuild/artifact/pip_test/whl/tensorflow-${TF_VERSION}-cp35-cp35m-linux_x86_64.whl
-    fi
+    TF_BINARY_URL="tf-nightly"
 fi
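The simplification in this hunk works because nightly TensorFlow builds are published on PyPI under the package name `tf-nightly`, so no per-Python-version wheel URL is needed. A minimal sketch of how the variable would then be consumed — the surrounding defaulting logic and the final `pip install` line are outside this hunk and are assumptions here:

```shell
#!/bin/bash
# Sketch (assumed context): pick the TF package to install based on
# TF_TYPE, then hand it to pip. TF_TYPE normally comes from the
# Travis build matrix; default it for a standalone run.
TF_TYPE=${TF_TYPE:-nightly}

if [ "$TF_TYPE" = "nightly" ]; then
    # "tf-nightly" is a PyPI package name, not a wheel URL,
    # so pip resolves the right wheel for the current Python.
    TF_BINARY_URL="tf-nightly"
fi

echo "pip install --upgrade $TF_BINARY_URL"
```

Running it prints the install command that the CI step would execute: `pip install --upgrade tf-nightly`.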