Neural Network Toolbox on TensorFlow.

...

See some [examples](examples) to learn about the framework:

### Vision:
+ [Train ResNet on ImageNet / Cifar10 / SVHN](examples/ResNet)
+ [Generative Adversarial Network (GAN) variants](examples/GAN), including DCGAN, InfoGAN, Conditional GAN, WGAN, BEGAN, DiscoGAN, Image to Image, CycleGAN.
+ [DoReFa-Net: train binary / low-bitwidth CNN on ImageNet](examples/DoReFa-Net)
+ [Fully-convolutional Network for Holistically-Nested Edge Detection (HED)](examples/HED)
+ [Spatial Transformer Networks on MNIST addition](examples/SpatialTransformer)
+ [Visualize CNN saliency maps](examples/Saliency)
...

It's Yet Another TF wrapper, but different in:

...
But you can use any other wrappers within tensorpack, such as sonnet/Keras/slim/tflearn/tensorlayer/....

2. Focus on __training speed__.
+ Speed comes for free with tensorpack: its trainers are almost always faster than `feed_dict`-based wrappers.
  Even on a tiny CNN example, the training runs [1.6x faster](https://gist.github.com/ppwwyyxx/8d95da79f8d97036a7d67c2416c851b6) than the equivalent Keras code.
+ Data-parallel multi-GPU training is available off-the-shelf (a minimal launch sketch follows this list). It is as fast as Google's [official benchmark](https://www.tensorflow.org/performance/benchmarks).
+ Data-parallel distributed training is available off-the-shelf. It is as slow as Google's official benchmark.
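
To make the multi-GPU point concrete, here is a rough sketch (not code from this repo) of launching data-parallel training. `MyModel` and `my_dataflow` are hypothetical placeholders for a `ModelDesc` subclass and a DataFlow defined elsewhere, and trainer class names have shifted across tensorpack versions, so treat the exact API as an assumption.

```python
# Hedged sketch of launching data-parallel multi-GPU training.
# `MyModel` (a ModelDesc subclass) and `my_dataflow` (a DataFlow) are
# hypothetical and assumed to be defined elsewhere in your code.
from tensorpack import TrainConfig, SyncMultiGPUTrainerReplicated, launch_train_with_config

config = TrainConfig(
    model=MyModel(),        # hypothetical ModelDesc describing the graph and loss
    dataflow=my_dataflow,   # any DataFlow yielding training datapoints
    callbacks=[],           # extensible callbacks (see point 4 below)
    max_epoch=100,
)

# Replicate the graph on 2 GPUs and run synchronous data-parallel training.
launch_train_with_config(config, SyncMultiGPUTrainerReplicated(2))
```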

3. Focus on large datasets.
+ It's painful to read/preprocess data through TF. Use __DataFlow__ to load large datasets (e.g. ImageNet) in __pure Python__ with multi-process prefetching.
+ DataFlow has a unified interface, so you can compose and reuse DataFlows to perform complex preprocessing (see the sketch below).
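
As an illustration (a hedged sketch, not taken from the examples), the snippet below composes DataFlows to load Cifar10, augment, batch, and prefetch it in pure Python. The class names (`PrefetchDataZMQ`, `AugmentImageComponent`, ...) follow the `tensorpack.dataflow` API of this era and may be named differently in later releases.

```python
# Hedged sketch: compose DataFlows in pure Python with multi-process prefetch.
from tensorpack.dataflow import dataset, imgaug
from tensorpack.dataflow import AugmentImageComponent, BatchData, PrefetchDataZMQ

df = dataset.Cifar10('train')                               # yields [image, label] datapoints
df = AugmentImageComponent(df, [imgaug.Flip(horiz=True)])   # augment the image component
df = BatchData(df, 64)                                      # stack datapoints into batches of 64
df = PrefetchDataZMQ(df, nr_proc=4)                         # prefetch in 4 worker processes over ZMQ

df.reset_state()                  # required before iterating manually
for dp in df.get_data():          # dp == [batched images, batched labels]
    break                         # normally you hand df to a trainer instead
```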

4. Interface of extensible __Callbacks__.
...
+ Run some operations once in a while
+ Send loss to your phone (a minimal callback sketch is below)
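
For example, a custom callback might look roughly like the following sketch. It assumes the `Callback` base class with a `_trigger_epoch` hook and an `epoch_num` counter; `notify_my_phone` is a hypothetical helper you would implement yourself.

```python
# Hedged sketch of a custom callback; `notify_my_phone` is hypothetical.
from tensorpack.callbacks import Callback
from tensorpack.utils import logger


class SendStatsToPhone(Callback):
    """Runs once after every epoch, outside the training iterations."""

    def _trigger_epoch(self):
        msg = "epoch {} finished".format(self.epoch_num)  # epoch counter kept by the trainer
        logger.info(msg)
        # notify_my_phone(msg)  # hypothetical: push the message to your phone

# Enable it with one line, e.g. TrainConfig(callbacks=[SendStatsToPhone()], ...)
```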

See [tutorials](http://tensorpack.readthedocs.io/en/latest/tutorial/index.html) to learn more about these features.