Commit ffa8099d
authored Jun 27, 2017 by Yuxin Wu
parent 762521e1

docs update

Showing 3 changed files with 19 additions and 15 deletions:
* docs/tutorial/callback.md (+7 -4)
* docs/tutorial/extend/callback.md (+10 -10)
* docs/tutorial/index.rst (+2 -1)
docs/tutorial/callback.md

```diff
@@ -2,13 +2,13 @@
 # Callbacks

 Apart from the actual training iterations that minimize the cost,
-you almost surely would like to do something else during training.
+you almost surely would like to do something else.
 Callbacks are such an interface to describe what to do besides the
 training iterations.

 There are several places where you might want to do something else:

-* Before the training has started (e.g. initialize the saver)
+* Before the training has started (e.g. initialize the saver, dump the graph)
 * Along with each training iteration (e.g. run some other operations in the graph)
 * Between training iterations (e.g. update the progress bar, update hyperparameters)
 * Between epochs (e.g. save the model, run some validation)
@@ -16,8 +16,8 @@ There are several places where you might want to do something else:
 We found people traditionally tend to write the training loop together with these extra features.
 This makes the loop lengthy, and the code for the same feature probably get separated.

-By writing callbacks to implement what you want to do at each place, tensorpack trainers
-will call them at the proper time.
+By writing callbacks to implement what to do at each place, tensorpack trainers
+will call the callbacks at the proper time.
 Therefore the code can be reused with one single line, as long as you are using tensorpack trainers.

 For example, these are the callbacks I used when training a ResNet:
@@ -69,3 +69,6 @@ TrainConfig(
 Notice that callbacks cover every detail of training, ranging from graph operations to the progress bar.
 This means you can customize every part of the training to your preference, e.g. display something
 different in the progress bar, evaluating part of the summaries at a different frequency, etc.
+
+See [Write a callback](http://tensorpack.readthedocs.io/en/latest/tutorial/extend/callback.html)
+on how to implement a callback.
```
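For orientation, a minimal sketch of how such a callback list is handed to a tensorpack trainer of this era. The ResNet callback list itself is elided in this view and is not reconstructed here; the specific callbacks, model, and dataflow names below are illustrative assumptions, and the trainer-launching API has changed across tensorpack versions:

```python
from tensorpack import TrainConfig, SimpleTrainer
from tensorpack.callbacks import ModelSaver, ScheduledHyperParamSetter

# Hypothetical setup: each callback contributes one piece of extra behavior,
# and the trainer invokes it at the proper time.
config = TrainConfig(
    model=MyModel(),        # a ModelDesc subclass, assumed defined elsewhere
    dataflow=my_dataflow,   # a DataFlow instance, assumed defined elsewhere
    callbacks=[
        ModelSaver(),  # between epochs: save the model
        # between iterations/epochs: update a hyperparameter on a schedule
        ScheduledHyperParamSetter('learning_rate', [(0, 0.1), (30, 0.01)]),
    ],
    max_epoch=100,
)
SimpleTrainer(config).train()  # old-style launch; newer versions differ
```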
docs/tutorial/extend/callback.md

````diff
 ## Write a callback

-The places where each method gets called is demonstrated in this snippet:
+The places where each callback method gets called is demonstrated in this snippet:
 ```python
 def main_loop():
-  # create graph for the model
+  # ... create graph for the model ...
   callbacks.setup_graph()
-  # create session, initialize session, finalize graph ...
+  # ... create session, initialize session, finalize graph ...
   # start training:
   callbacks.before_train()
   for epoch in range(epoch_start, epoch_end):
     callbacks.before_epoch()
     for step in range(steps_per_epoch):
-      run_step()  # callbacks.{before,after}_run are hooked with session
+      run_one_step()  # callbacks.{before,after}_run are hooked with session
       callbacks.trigger_step()
     callbacks.after_epoch()
     callbacks.trigger_epoch()
   callbacks.after_train()
 ```

-## Explain the callback methods
+### Explain the callback methods

-You can overwrite any of the following methods to define a new callback:
+You can override any of the following methods to define a new callback:

 * `_setup_graph(self)`
````
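To make the method list concrete, here is a minimal custom-callback skeleton. It assumes the `Callback` base class importable from `tensorpack.callbacks` (true for tensorpack of this era); the class itself and what it measures are invented for illustration:

```python
import time

from tensorpack.callbacks import Callback


class EpochTimeLogger(Callback):
    """Toy callback: time each epoch and print the result."""

    def _setup_graph(self):
        # "Define" phase: all changes to the graph must happen here,
        # before it is finalized. This toy callback needs no extra ops.
        pass

    def _before_train(self):
        # Runs once after the session is created, before training starts.
        self._start = time.time()

    def _trigger_step(self):
        # Runs between iterations; keep this light to not slow training down.
        pass

    def _trigger_epoch(self):
        # Runs between epochs; heavier work (validation, saving) belongs here.
        print('epoch took {:.1f}s'.format(time.time() - self._start))
        self._start = time.time()
```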
```diff
@@ -33,7 +33,7 @@ to access those already defined in the training tower. Or use
 to create a callable evaluation function in the predict tower.
 This method is to separate between "define" and "run", and also to avoid the common mistake to create ops inside
-loops, all changes to the graph should be made in this method.
+loops. All changes to the graph should be made in this method.
 * `_before_train(self)`
```
```diff
@@ -52,7 +52,7 @@ Otherwise, `_trigger_epoch` should be enough.
 These two are the equivalent of [tf.train.SessionRunHook](https://www.tensorflow.org/api_docs/python/tf/train/SessionRunHook).
 Please refer to TensorFlow documentation for detailed API.
-They are used to run extra ops / eval extra tensors / feed extra values __along with__ the actual training iteration.
+They are used to run extra ops / eval extra tensors / feed extra values __along with__ the actual training iterations.
 Note the difference between running __along with__ an iteration and running after an iteration.
 When you write
```
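The __along with__ pattern this hunk describes maps onto `tf.train.SessionRunArgs` from the SessionRunHook API linked above. A hedged sketch; the callback, the fetched tensor name, and the use of `put_scalar` (one concrete form of the `put_xxx` methods mentioned later) are illustrative:

```python
import tensorflow as tf
from tensorpack.callbacks import Callback


class FetchExtraTensor(Callback):
    """Fetch an extra tensor in the same session.run() as the training step."""

    def _setup_graph(self):
        # Look up the tensor during the "define" phase (hypothetical name).
        self._t = tf.get_default_graph().get_tensor_by_name('tower0/my_tensor:0')

    def _before_run(self, ctx):
        # Returning SessionRunArgs merges this fetch into the training step,
        # so the value comes from the same iteration...
        return tf.train.SessionRunArgs(fetches=self._t)

    def _after_run(self, ctx, run_values):
        # ...and arrives here right after that step finishes.
        self.trainer.monitors.put_scalar('my_tensor', run_values.results)
```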
```diff
@@ -68,7 +68,7 @@ which is what you would get if you run the op in `_trigger_step`.
 * `_trigger_step(self)`
-Do something (including running ops) after each step has finished.
+Do something (e.g. running ops, print stuff) after each step has finished.
 Be careful to only do light work here because it could affect training speed.
 * `_trigger_epoch(self)`
```
```diff
@@ -81,7 +81,7 @@ By default will get called by `_trigger_epoch`,
 but you can customize the scheduling of this callback by
 `PeriodicTrigger`, to let this method run every k steps or every k epochs.
-## What you can do in the callback
+### What you can do in the callback
 * Access tensors / ops in either training / inference mode (need to create them in `_setup_graph`).
 * Write stuff to the monitor backend, by `self.trainer.monitors.put_xxx`.
```
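A short sketch of the `PeriodicTrigger` scheduling mentioned in the last hunk, assuming tensorpack's `PeriodicTrigger` wrapper; `MyCallback` is a placeholder:

```python
from tensorpack.callbacks import PeriodicTrigger

callbacks = [
    # run MyCallback's trigger() every 10 steps instead of every epoch
    PeriodicTrigger(MyCallback(), every_k_steps=10),
    # or only every 5 epochs, for expensive work such as a full validation run
    PeriodicTrigger(MyCallback(), every_k_epochs=5),
]
```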
docs/tutorial/index.rst

```diff
@@ -13,7 +13,8 @@ A High Level Glance
   processing.
 * You can use any TF-based symbolic function library to define a model in tensorpack.
-  :doc:`model` introduces where and how you define the model for tensorpack trainers to use,
+  And ``ModelDesc`` is an interface to connect symbolic graph to tensorpack trainers.
+  :doc:`model` introduces where and how you define the graph for tensorpack trainers to use,
   and how you can benefit from the small symbolic function library in tensorpack.
 Both DataFlow and models can be used outside tensorpack, as just a data processing library and a symbolic
```
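A sketch of the ``ModelDesc`` interface this hunk refers to, following the two-method pattern of tensorpack at the time (`_get_inputs` declaring the input signature, `_build_graph` building the symbolic graph); the tiny model is invented for illustration, and API details vary across versions:

```python
import tensorflow as tf
from tensorpack import ModelDesc, InputDesc


class MyModel(ModelDesc):
    def _get_inputs(self):
        # Declare the input signature of the graph.
        return [InputDesc(tf.float32, (None, 28, 28), 'input'),
                InputDesc(tf.int32, (None,), 'label')]

    def _build_graph(self, inputs):
        # Any TF-based symbolic function library can be used in here.
        image, label = inputs
        logits = tf.layers.dense(tf.reshape(image, [-1, 28 * 28]), 10)
        cost = tf.losses.sparse_softmax_cross_entropy(labels=label, logits=logits)
        self.cost = tf.identity(cost, name='cost')  # trainers minimize self.cost
```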