Commit bedec8cd, authored Nov 29, 2017 by Yuxin Wu

update docs formatting

parent f55d81f2
Showing 2 changed files, with 45 additions and 45 deletions:

docs/tutorial/extend/callback.md (+41, -41)
docs/tutorial/inference.md (+4, -4)
docs/tutorial/extend/callback.md

@@ -29,71 +29,71 @@
You can overwrite any of the following methods to define a new callback:
* `_setup_graph(self)`

  Create any ops / tensors in the graph which you might need to use in the callback.
  This method exists to separate "define" and "run", and to
  avoid the common mistake of creating ops inside loops.
  All changes to the graph should be made in this method.

  To access ops which are already defined,
  you can use TF methods such as
  [`graph.get_tensor_by_name`](https://www.tensorflow.org/api_docs/python/tf/Graph#get_tensor_by_name).
  If you're using a `TowerTrainer` instance, more tools are available:
  - Use `self.trainer.tower_func.towers` to access the
    [tower handles](../modules/tfutils.html#tensorpack.tfutils.tower.TowerTensorHandles),
    and therefore the tensors in each tower.
  - [self.get_tensors_maybe_in_tower()](../modules/callbacks.html#tensorpack.callbacks.Callback.get_tensors_maybe_in_tower)
    is a helper function to access tensors in the first training tower.
  - [self.trainer.get_predictor()](../modules/train.html#tensorpack.train.TowerTrainer.get_predictor)
    is a helper function to create a callable under inference mode.
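  Putting the above together, a minimal sketch of a callback whose `_setup_graph` looks up an existing tensor by name (the tensor name below is a hypothetical placeholder):

  ```python
  import tensorflow as tf
  from tensorpack.callbacks import Callback

  class MyCallback(Callback):
      def _setup_graph(self):
          # Define or look up graph elements once, here -- not inside the run loop.
          # "tower0/my_loss:0" is a placeholder name; substitute one from your graph.
          self._loss = tf.get_default_graph().get_tensor_by_name("tower0/my_loss:0")
  ```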
* `_before_train(self)`

  Can be used to run some manual initialization of variables, or start some services for the training.
* `_after_train(self)`

  Usually some finalization work.
* `_before_epoch(self)`, `_after_epoch(self)`

  Use them __only__ when you really need something to happen __immediately__ before/after an epoch.
  Otherwise, `_trigger_epoch` should be enough.
* `_before_run(self, ctx)`, `_after_run(self, ctx, values)`

  These are the equivalents of
  [tf.train.SessionRunHook](https://www.tensorflow.org/api_docs/python/tf/train/SessionRunHook).
  Please refer to the TensorFlow documentation for the detailed API.
  They are used to run extra ops / evaluate extra tensors / feed extra values __along with__ the actual training iterations.

  Note the difference between running __along with__ an iteration and running after an iteration.
  When you write
  ```python
  def _before_run(self, _):
      return tf.train.SessionRunArgs(fetches=my_op)
  ```
  the training loop would become `sess.run([training_op, my_op])`.
  This is different from `sess.run(training_op); sess.run(my_op);`,
  which is what you would get if you run the op in `_trigger_step`.
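  Symmetrically, `_after_run` receives the values fetched this way; a sketch of the counterpart method, continuing the example above:

  ```python
  def _after_run(self, _, run_values):
      # run_values.results holds whatever _before_run asked to fetch
      my_op_result = run_values.results
  ```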
* `_trigger_step(self)`

  Do something (e.g. run ops, print things) after each step has finished.
  Be careful to only do light work here, because it could affect training speed.
* `_trigger_epoch(self)`

  Do something after each epoch has finished. Will call `self.trigger()` by default.
* `_trigger(self)`

  Define something to do here without knowing how often it will get called.
  By default it will get called by `_trigger_epoch`,
  but you can customize the scheduling of this method with
  [`PeriodicTrigger`](../../modules/callbacks.html#tensorpack.callbacks.PeriodicTrigger),
  to let this method run every k steps or every k epochs, as sketched below.
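For instance, a sketch of wrapping a callback (the `MyCallback` class here is a hypothetical stand-in) so that its `_trigger` runs every 1000 steps instead of once per epoch:

```python
from tensorpack.callbacks import PeriodicTrigger

# MyCallback is a placeholder for any callback that implements _trigger().
cb = PeriodicTrigger(MyCallback(), every_k_steps=1000)
```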
### What you can do in the callback
...
docs/tutorial/inference.md

@@ -6,8 +6,8 @@
There are two ways to do inference during training.
1. The easiest way is to write a callback, and use
   `self.trainer.get_predictor()` to get a callable under inference mode.
   See [Write a Callback](extend/callback.html).
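   A minimal sketch of such a callback (the input/output tensor names are hypothetical placeholders):

   ```python
   from tensorpack.callbacks import Callback

   class InferenceDuringTraining(Callback):
       def _setup_graph(self):
           # 'input' and 'output' are placeholders; use your model's tensor names.
           self._pred = self.trainer.get_predictor(['input'], ['output'])

       def _trigger(self):
           # `batch` stands for your own validation data; this runs under inference mode.
           output, = self._pred(batch)
   ```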
2. If your inference follows the paradigm of
   "fetch some tensors for each input, and aggregate the results".
...

@@ -21,10 +21,10 @@
Tensorpack doesn't care what happens after training.
It saves models to the standard checkpoint format, plus a metagraph protobuf file.
They are sufficient to use with whatever deployment methods TensorFlow supports.
But you'll need to read the TF docs and do it on your own.
The only thing tensorpack has is `OfflinePredictor`,
a simple function that builds the graph and returns a callable for you.
It only runs inference on Python data, therefore it may not be the most efficient way.
Check out some examples for its usage.
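As a rough sketch of that usage (the model class, tensor names, and checkpoint path below are hypothetical placeholders):

```python
from tensorpack import OfflinePredictor, PredictConfig, SaverRestore

# MyModel, 'input', 'prob', and the checkpoint path are placeholders.
pred = OfflinePredictor(PredictConfig(
    model=MyModel(),
    session_init=SaverRestore('/path/to/checkpoint'),
    input_names=['input'],
    output_names=['prob']))
prob, = pred(input_array)  # feed Python/numpy data, get the outputs back
```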