@@ -38,7 +38,9 @@ It's Yet Another TF wrapper, but different in:
+ The Tensorpack trainer is almost always faster than `feed_dict`-based wrappers.
Even on a tiny CNN example, the training runs [2x faster](https://gist.github.com/ppwwyyxx/8d95da79f8d97036a7d67c2416c851b6) than the equivalent Keras code.
+ Data-parallel multi-GPU training is off-the-shelf to use. It is as fast as Google's [benchmark code](https://github.com/tensorflow/benchmarks).
+ Data-parallel distributed training is off-the-shelf to use. It is as fast as Google's [benchmark code](https://github.com/tensorflow/benchmarks).
3. Focus on large datasets.
+ It's painful to read/preprocess data with TF. Use __DataFlow__ to efficiently process large datasets such as ImageNet in __pure Python__ (see the sketch after this list).
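
To make the above concrete, here is a rough sketch of handing a pure-Python DataFlow to a data-parallel multi-GPU trainer. It assumes a recent tensorpack release (where `launch_train_with_config` is available); `MyModel` is a hypothetical `ModelDesc` subclass you would define yourself.

```python
# A minimal sketch, not a verbatim example from the docs.
from tensorpack import TrainConfig, launch_train_with_config
from tensorpack.dataflow import dataset, BatchData, PrefetchDataZMQ
from tensorpack.train import SyncMultiGPUTrainerReplicated

# Build the input pipeline in pure Python: load, batch, then prefetch
# the data with several worker processes.
df = dataset.Mnist('train')        # any DataFlow works here, e.g. one that reads ImageNet
df = BatchData(df, 128)
df = PrefetchDataZMQ(df, nr_proc=4)

config = TrainConfig(
    model=MyModel(),               # hypothetical ModelDesc subclass defined by you
    dataflow=df,
    max_epoch=100,
)
# Data-parallel training on 2 GPUs; SimpleTrainer() would run on a single device.
launch_train_with_config(config, SyncMultiGPUTrainerReplicated(2))
```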
@@ -74,3 +80,12 @@ Do something after each epoch has finished. Will call `self.trigger()` by defaul
By default this method will get called by `_trigger_epoch`,
but you can customize its scheduling with `PeriodicTrigger`
to let it run every k steps or every k epochs.
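
For instance, a minimal sketch of wrapping a callback in `PeriodicTrigger` (the callback class here is a toy one, made up for illustration):

```python
from tensorpack.callbacks import Callback, PeriodicTrigger

class MyCallback(Callback):        # toy callback, for illustration only
    def _trigger(self):
        print('triggered at global step {}'.format(self.global_step))

callbacks = [
    # Run MyCallback's trigger() every 1000 steps, and also at every 10th epoch,
    # instead of the default once-per-epoch schedule.
    PeriodicTrigger(MyCallback(), every_k_steps=1000, every_k_epochs=10),
]
```

The wrapped callback is then handed to the trainer through `TrainConfig(callbacks=...)` like any other callback.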
## What you can do in the callback
* Access tensors / ops in either training / inference mode (need to create them in `_setup_graph`).
* Write data to the monitor backend with `self.trainer.monitors.put_xxx`.
  The monitors may direct your events to a TensorFlow events file, a JSON file, stdout, etc.
  You can also retrieve historical monitor data. See the docs for [Monitors](http://tensorpack.readthedocs.io/en/latest/modules/callbacks.html#tensorpack.callbacks.Monitors).
* Access the current status of training, such as `epoch_num` and `global_step`. See [here](http://tensorpack.readthedocs.io/en/latest/modules/callbacks.html#tensorpack.callbacks.Callback).
* Anything else that can be done with plain Python.
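
Putting these pieces together, here is a hedged sketch of a custom callback; the tensor name `'fc0/W:0'` and the scalar name are made-up placeholders that would have to match names in your own model.

```python
import tensorflow as tf
from tensorpack.callbacks import Callback


class TrackWeightNorm(Callback):
    """Toy callback: log the norm of one weight tensor after every epoch."""

    def _setup_graph(self):
        # Create any extra ops you need here; 'fc0/W:0' must match a tensor
        # that your model actually defines.
        w = tf.get_default_graph().get_tensor_by_name('fc0/W:0')
        self._norm = tf.norm(w, name='fc0_W_norm')

    def _trigger(self):
        # Called after each epoch by default (see _trigger_epoch above).
        val = self.trainer.sess.run(self._norm)
        # Send it to the monitor backend (TF events file, JSON file, stdout, ...).
        self.trainer.monitors.put_scalar('fc0/W_norm', val)
        # Current training status is available as properties on the callback.
        print('epoch {}, global step {}'.format(self.epoch_num, self.global_step))
```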