+ [DQN variants on Atari games](examples/Atari2600)
+ [Asynchronous Advantage Actor-Critic (A3C) with demos on OpenAI Gym](examples/OpenAIGym)
+ [char-rnn language model](examples/char-rnn)
...
You can actually train them and reproduce the performance... not just to see how
...
Describe your training task with three components:
1. __Model__, or graph. `models/` has some scoped abstractions of common models, but you can simply use
   any symbolic function available in tensorflow, or most functions in slim/tflearn/tensorlayer.
   `LinearWrap` and `argscope` make large models look simpler ([vgg example](https://github.com/ppwwyyxx/tensorpack/blob/master/examples/load-vgg16.py)).
2. __DataFlow__. tensorpack allows and encourages complex data processing.
   + All data producers have a unified `generator` interface, allowing them to be composed to perform complex preprocessing.
   + Use Python to easily handle any data format, yet still keep good training speed thanks to multiprocess prefetch & TF Queue prefetch.
     For example, InceptionV3 can run at the same speed as the official code which reads data using TF operators.
3. __Callbacks__, including everything you want to do apart from the training iterations, such as:
   + Change hyperparameters during training
   + Print some variables of interest
   + Run inference on a test dataset
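The composable `generator` interface behind the DataFlow idea (point 2) can be sketched in plain Python. The helper names below (`number_source`, `map_data`, `batch_data`) are hypothetical illustrations, not tensorpack's actual API:

```python
# A minimal sketch of composable generators, the idea behind DataFlow.
# All names here are hypothetical, not the actual tensorpack API.

def number_source(n):
    """A toy data producer: yields datapoints one at a time."""
    for i in range(n):
        yield [i]

def map_data(df, func):
    """Wrap a producer, applying `func` to each datapoint's component."""
    for dp in df:
        yield [func(dp[0])]

def batch_data(df, size):
    """Wrap a producer, grouping consecutive datapoints into batches."""
    batch = []
    for dp in df:
        batch.append(dp)
        if len(batch) == size:
            yield batch
            batch = []

# Compose: produce -> preprocess -> batch, all through one generator interface.
pipeline = batch_data(map_data(number_source(6), lambda x: x * x), size=3)
batches = list(pipeline)
print(batches)  # [[[0], [1], [4]], [[9], [16], [25]]]
```

Because every stage is just a generator wrapping another generator, arbitrary preprocessing steps can be stacked without any stage knowing about the others.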
...
With the above components defined, the tensorpack trainer will run the training iterations.
Multi-GPU training is off-the-shelf by simply switching the trainer.
You can also define your own trainer for non-standard training (e.g. GAN).
The components are designed to be independent. You can use only Model or DataFlow in your project.
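The callback mechanism described in point 3 above can be sketched as hooks the trainer invokes around its iterations. The class and attribute names below are hypothetical, not tensorpack's actual `Callback` API:

```python
# A minimal sketch of the callback idea: hooks run by the trainer
# between training iterations. Names are hypothetical, not the
# actual tensorpack Callback API.

class Callback:
    def trigger_epoch(self, trainer):
        pass

class ScheduledLR(Callback):
    """Change a hyperparameter (learning rate) at scheduled epochs."""
    def __init__(self, schedule):
        self.schedule = schedule  # mapping: epoch -> learning rate

    def trigger_epoch(self, trainer):
        if trainer.epoch in self.schedule:
            trainer.lr = self.schedule[trainer.epoch]

class Trainer:
    def __init__(self, callbacks):
        self.callbacks = callbacks
        self.epoch = 0
        self.lr = 0.1

    def train(self, epochs):
        for self.epoch in range(1, epochs + 1):
            # ... run the training iterations for one epoch ...
            for cb in self.callbacks:
                cb.trigger_epoch(self)

t = Trainer([ScheduledLR({3: 0.01})])
t.train(5)
print(t.lr)  # 0.01
```

Keeping such logic in callbacks is what lets the training loop itself stay generic: hyperparameter schedules, logging, and inference all plug in without touching the trainer.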