Commit f2ae63d2 authored Jan 30, 2017 by Yuxin Wu
Parent: deaabc90

update docs

Showing 5 changed files with 69 additions and 11 deletions (+69 −11):

* docs/user/callbacks.md (+56 −0)
* docs/user/trainer.md (+6 −6)
* docs/user/tutorials.rst (+2 −0)
* tensorpack/callbacks/summary.py (+2 −2)
* tensorpack/train/config.py (+3 −3)
docs/user/callbacks.md (new file, 0 → 100644)
# Callbacks

Apart from the actual training iterations that minimize the cost,
you almost surely want to do something else during training.
Callbacks are the interface for describing what to do besides the
training iterations defined by the trainers.

There are several places where you might want to do something else:

* Before the training has started (e.g. initialize the saver)
* Along with each training iteration (e.g. run some other operations in the graph)
* Between training iterations (e.g. update the progress bar, update hyperparameters)
* Between epochs (e.g. save the model, run some validation)
* After the training (e.g. send the model somewhere, send a message to your phone)

By writing callbacks to implement these tasks, you can reuse them as long as
you're using tensorpack trainers. For example, these are the callbacks I used when training
a ResNet:
```python
TrainConfig(
    # ...
    callbacks=[
        # save the model every epoch
        ModelSaver(),
        # run inference on another Dataflow every epoch, compute
        # top1/top5 classification error and save them
        InferenceRunner(dataset_val, [
            ClassificationError('wrong-top1', 'val-error-top1'),
            ClassificationError('wrong-top5', 'val-error-top5')]),
        # schedule the learning rate based on epoch number
        ScheduledHyperParamSetter('learning_rate',
                                  [(30, 1e-2), (60, 1e-3), (85, 1e-4), (95, 1e-5)]),
        # allow manually setting the learning rate during training
        HumanHyperParamSetter('learning_rate'),
        # send validation error to my phone through pushbullet
        SendStat('curl -u your_id_xxx: https://api.pushbullet.com/v2/pushes '
                 '-d type=note -d title="validation error" '
                 '-d body={val-error-top1} > /dev/null 2>&1',
                 'val-error-top1')
    ],
    extra_callbacks=[
        # these are already enabled by default:
        # maintain and summarize moving average of some tensors (e.g. training loss, training error)
        MovingAverageSummary(),
        # draw a nice progress bar
        ProgressBar(),
        # print all the statistics I've created and scalar tensors I've summarized
        StatPrinter(),
    ]
)
```
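A config like this is then handed to a trainer, which runs the actual loop; in the tensorpack of this vintage that was typically a single call such as `QueueInputTrainer(config).train()` (the trainer name here is an assumption for illustration, not part of this commit).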
## Write a callback
TODO
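Until that section is written, here is a minimal sketch of what a custom callback could look like, assuming the `Callback` base class exposes hook methods matching the list above (`_before_train`, `_trigger_epoch`, and the `epoch_num` attribute are my assumptions, not taken from this commit):

```python
from tensorpack.callbacks.base import Callback  # import path as in summary.py below

class PhoneNotifier(Callback):
    """Hypothetical callback: print a note at two of the training hook points."""

    def _before_train(self):
        # runs once before the first iteration (e.g. initialize resources)
        print("training is about to start")

    def _trigger_epoch(self):
        # runs between epochs (e.g. validate, save, or send a message)
        print("finished epoch {}".format(self.epoch_num))  # epoch_num: assumed attribute
```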
docs/user/trainer.md
```diff
@@ -3,18 +3,18 @@
 ## Trainer
 
-Training is just "running something again and again".
+Training is basically **running something again and again**.
 Tensorpack base trainer implements the logic of *running the iteration*,
 and other trainers implement what the iteration is.
 
 Most neural network training tasks are single-cost optimization.
 Tensorpack provides some trainer implementations for such tasks.
-These trainers will by default optimizes `ModelDesc.cost`,
+These trainers will by default minimizes `ModelDesc.cost`,
 therefore you can use these trainers as long as you set `self.cost` in `ModelDesc._build_graph()`.
 
 The existing trainers were implemented with a TensorFlow queue to prefetch and buffer
-training data, which is significantly faster than a naive `sess.run(..., feed_dict={...})` you might use.
+training data, which is significantly faster than a naive `sess.run(..., feed_dict={...})`.
 
 There are also multi-GPU trainers which includes the logic of data-parallel multi-GPU training,
 with either synchronous update or asynchronous update. You can enable multi-GPU training
 by just changing one line.
@@ -47,11 +47,11 @@ Some trainer takes data from a TensorFlow reading pipeline instead of a Dataflow
 ([PTB example](../examples/PennTreebank)).
 
-## Develop trainers
+## Write trainers
 
 The existing trainers should be enough for single-cost optimization tasks. If you
-want to do something inside the trainer, considering writing it as a callback, or
-submit an issue to see if there is a better solution than creating new trainers.
+want to do something inside the trainer, consider writing it as a callback, or
+write an issue to see if there is a better solution than creating new trainers.
 
 For other tasks, you might need a new trainer.
 The [GAN trainer](../examples/GAN/GAN.py) is one example of how to implement
```
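The "changing one line" remark above refers to picking a different trainer class; a minimal sketch under that reading (the trainer class names are assumptions for illustration, not shown in this commit):

```python
# Sketch only: QueueInputTrainer / SyncMultiGPUTrainer are assumed names
# from tensorpack of this era, not part of this commit.
from tensorpack import QueueInputTrainer, SyncMultiGPUTrainer

def launch(config, multi_gpu=False):
    # The single-line change: swap the trainer class; the TrainConfig,
    # callbacks and dataflow all stay exactly the same.
    trainer_cls = SyncMultiGPUTrainer if multi_gpu else QueueInputTrainer
    trainer_cls(config).train()
```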
docs/user/tutorials.rst
```diff
@@ -10,3 +10,5 @@ Test.
   glance
   dataflow
   models
+  trainer
+  callbacks
```
tensorpack/callbacks/summary.py
```diff
@@ -10,10 +10,10 @@ from ..utils.naming import MOVING_SUMMARY_VARS_KEY
 from ..tfutils.common import get_global_step_var
 from .base import Callback
 
-__all__ = ['SummaryMovingAverage']
+__all__ = ['MovingAverageSummary']
 
-class SummaryMovingAverage(Callback):
+class MovingAverageSummary(Callback):
     """ Maintain the moving average of the tensors
     in every step, and summarize them. Enabled by default.
     """
```
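For context on the renamed class: it summarizes whatever tensors sit in the `MOVING_SUMMARY_VARS_KEY` collection imported at the top of this file. A minimal sketch of how a tensor might end up there, using plain `tf.add_to_collection` (tensorpack may also offer a helper for this, which I have not assumed):

```python
import tensorflow as tf
from tensorpack.utils.naming import MOVING_SUMMARY_VARS_KEY  # same key as above

# Any scalar tensor placed in this collection gets a per-step moving
# average maintained and summarized by MovingAverageSummary():
cost = tf.constant(0.25, name='total_cost')  # stand-in for a real training cost
tf.add_to_collection(MOVING_SUMMARY_VARS_KEY, cost)
```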
tensorpack/train/config.py
```diff
@@ -5,7 +5,7 @@
 import tensorflow as tf
 
-from ..callbacks import (Callbacks, SummaryMovingAverage,
+from ..callbacks import (Callbacks, MovingAverageSummary,
                          StatPrinter, ProgressBar,
                          MaintainStepCounter)
 from ..dataflow.base import DataFlow
@@ -41,7 +41,7 @@ class TrainConfig(object):
         callbacks (list): a list of :class:`Callback` to perform during training.
         extra_callbacks (list): the same as ``callbacks``. This argument
             is only used to provide the defaults. The defaults are
-            ``[SummaryMovingAverage(), ProgressBar(), StatPrinter()]``. The list of
+            ``[MovingAverageSummary(), ProgressBar(), StatPrinter()]``. The list of
             callbacks that will be used in the end are ``callbacks + extra_callbacks``.
             Note that ``StatPrinter`` should be the last one to be able to print
             stats generated by other callbacks.
@@ -86,7 +86,7 @@ class TrainConfig(object):
         assert_type(callbacks, list)
         if extra_callbacks is None:
-            extra_callbacks = [SummaryMovingAverage(),
+            extra_callbacks = [MovingAverageSummary(),
                                ProgressBar(),
                                StatPrinter()]
         self.callbacks = [MaintainStepCounter()] + callbacks + extra_callbacks
```
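The net effect of the last hunk, spelled out: with `extra_callbacks` left as `None`, the final callback list is assembled as below (a stand-alone illustration; the names imported here all appear in the diff or in callbacks.md above):

```python
from tensorpack.callbacks import (MaintainStepCounter, ModelSaver,
                                  MovingAverageSummary, ProgressBar, StatPrinter)

# What TrainConfig.__init__ computes when extra_callbacks is None:
callbacks = [ModelSaver()]  # whatever the user passed in
extra_callbacks = [MovingAverageSummary(), ProgressBar(), StatPrinter()]
final = [MaintainStepCounter()] + callbacks + extra_callbacks
# -> [MaintainStepCounter(), ModelSaver(),
#     MovingAverageSummary(), ProgressBar(), StatPrinter()]
# StatPrinter stays last, so it can print stats generated by the others.
```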