Shashank Suhas / seminar-breakout / Commits

Commit 035f597d, authored Nov 09, 2017 by Yuxin Wu
Parent: 5a461be1

update docs

Showing 5 changed files with 46 additions and 26 deletions (+46 −26)
.github/ISSUE_TEMPLATE.md        +2  −0
docs/tutorial/faq.md             +7  −26
docs/tutorial/index.rst          +1  −0
docs/tutorial/save-load.md       +34 −0
tensorpack/callbacks/steps.py    +2  −0
.github/ISSUE_TEMPLATE.md (view file @ 035f597d)

@@ -20,3 +20,5 @@ Usage Questions, e.g.:
 "Why certain examples need to be written in this way?"
 We don't answer general machine learning questions like:
 "I want to do [this machine learning task]. What specific things do I need to do?"
+
+You can also use gitter (https://gitter.im/tensorpack/users) for more casual discussions.
docs/tutorial/faq.md (view file @ 035f597d)

@@ -16,35 +16,16 @@ If you think:
 Then it is a good time to open an issue.

-## How to dump/inspect a model
+## How to print/dump intermediate results in training

-When you enable `ModelSaver` as a callback,
-trained models will be stored in TensorFlow checkpoint format, which typically includes a
-`.data-xxxxx` file and a `.index` file. Both are necessary.
-
-To inspect a checkpoint, the easiest tool is `tf.train.NewCheckpointReader`. Please note that it
-expects a model path without the extension.
-
-You can dump a cleaner version of the model (without unnecessary variables), using
-`scripts/dump-model-params.py`, as a simple `var-name: value` dict saved in npy/npz format.
-The script expects a metagraph file which is also saved by `ModelSaver`.
-
-## How to load a model / do transfer learning
-
-All model loading (in either training or testing) is through the `session_init` initializer
-in `TrainConfig` or `PredictConfig`.
-The common choices for this option are `SaverRestore` which restores a
-TF checkpoint, or `DictRestore` which restores a dict. (`get_model_loader` is a small helper to
-decide which one to use from a file name.)
-
-Doing transfer learning is trivial.
-Variable restoring is completely based on name match between
-the current graph and the `SessionInit` initializer.
-Therefore, if you want to load some model, just use the same variable name
-so the old value will be loaded into the variable.
-If you want to re-train some layer, just rename it.
-Unmatched variables on both sides will be printed as a warning.
+1. Learn `tf.Print`.
+2. Know the [DumpTensors](http://tensorpack.readthedocs.io/en/latest/modules/callbacks.html#tensorpack.callbacks.DumpTensors)
+   and [ProcessTensors](http://tensorpack.readthedocs.io/en/latest/modules/callbacks.html#tensorpack.callbacks.ProcessTensors)
+   callbacks. It's also easy to write your own version of them.
+3. The [ProgressBar](http://tensorpack.readthedocs.io/en/latest/modules/callbacks.html#tensorpack.callbacks.ProgressBar)
+   callback can print some scalar statistics, though not enabled by default.

 ## How to freeze some variables in training
 ...
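Point 2 of the new FAQ entry says it's easy to write your own version of these callbacks. A minimal pure-Python sketch of the idea follows; the class name and `trigger_step` signature are invented for illustration and are not tensorpack's actual Callback API:

```python
# A toy stand-in for a ProcessTensors-style callback: it receives the
# values of some named tensors each step and applies a function to them.
# Illustrative only -- tensorpack's real Callback API differs.
class ProcessValues:
    def __init__(self, names, fn):
        self.names = names   # which fetched values to use
        self.fn = fn         # what to do with them

    def trigger_step(self, fetched):
        # fetched: dict mapping tensor name -> value for this step
        self.fn(*[fetched[n] for n in self.names])

# Usage: collect the per-step "loss" value, ignoring everything else.
logged = []
cb = ProcessValues(["loss"], lambda v: logged.append(v))
cb.trigger_step({"loss": 0.25, "lr": 1e-3})
cb.trigger_step({"loss": 0.20, "lr": 1e-3})
```

A dump-style variant would just make `fn` write the values to disk instead of appending them to a list.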
docs/tutorial/index.rst (view file @ 035f597d)

@@ -43,6 +43,7 @@ User Tutorials
    trainer
    training-interface
    callback
+   save-load
    summary
    faq
 ...
docs/tutorial/save-load.md (new file @ 035f597d, 0 → 100644)

# Save and Load models

## Work with TF Checkpoint

The `ModelSaver` callback saves the model to `logger.get_logger_dir()`,
in TensorFlow checkpoint format.
One checkpoint typically includes a `.data-xxxxx` file and a `.index` file.
Both are necessary.
To inspect a checkpoint, the easiest tool is `tf.train.NewCheckpointReader`.
For example, [scripts/ls-checkpoint.py](../scripts/ls-checkpoint.py)
uses it to print all variables and their shapes in a checkpoint.

[scripts/dump-model-params.py](../scripts/dump-model-params.py)
can be used to remove unnecessary variables in a checkpoint.
It takes a metagraph file (which is also saved by `ModelSaver`)
and only saves variables that the model needs at inference time.
It can dump the model to a `var-name: value` dict saved in npy/npz format.
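For illustration, the `var-name: value` npz format can be sketched with plain NumPy. The variable names below are invented; only the standard `np.savez`/`np.load` round-trip is assumed, not the script's actual internals:

```python
import io
import numpy as np

# An invented "var-name: value" dict, in the spirit of dump-model-params.py output.
params = {
    "conv1/W": np.zeros((3, 3, 3, 16), dtype=np.float32),
    "conv1/b": np.zeros((16,), dtype=np.float32),
}

# Save it in npz format (to an in-memory buffer here; normally a .npz file on disk).
buf = io.BytesIO()
np.savez(buf, **params)
buf.seek(0)

# Load it back: keys are the variable names, values are the arrays.
loaded = np.load(buf)
names = sorted(loaded.files)
shape_b = loaded["conv1/b"].shape
```

A dict in this form can then be fed back into training or inference through `DictRestore`, described below.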
## Load a Model

Model loading (in either training or testing) is done through the `session_init` interface.
Currently there are two ways a session can be restored:
`session_init=SaverRestore(...)`, which restores a TF checkpoint,
or `session_init=DictRestore(...)`, which restores a dict.
(`get_model_loader` is a small helper to decide which one to use from a file name.)

Variable restoring is completely based on name match between
variables in the current graph and variables in the `session_init` initializer.
Variables that appear on only one side will be printed as a warning.
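The name-match rule can be shown with a small pure-Python sketch. This is not tensorpack's `DictRestore` implementation, just the matching logic the paragraph describes, with invented variable names:

```python
import warnings

def restore_by_name(graph_vars, checkpoint):
    """Copy values from `checkpoint` into `graph_vars` wherever names match;
    warn about variables that appear on only one side."""
    matched = set(graph_vars) & set(checkpoint)
    for name in matched:
        graph_vars[name] = checkpoint[name]
    for name in sorted(set(graph_vars) - matched):
        warnings.warn("variable {} in graph not found in the checkpoint".format(name))
    for name in sorted(set(checkpoint) - matched):
        warnings.warn("variable {} in checkpoint not used by the graph".format(name))
    return graph_vars

# Usage: "fc/W" was renamed to "fc_new/W" in the graph, so it is left
# untouched, and both unmatched names produce a warning.
graph = {"conv1/W": None, "fc_new/W": None}
ckpt = {"conv1/W": 1.0, "fc/W": 2.0}
restored = restore_by_name(graph, ckpt)
```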
## Transfer Learning

Because restoring is purely name-based, transfer learning is trivial:
if you want to load some model, just use the same variable names;
if you want to re-train some layer, just rename it.
tensorpack/callbacks/steps.py (view file @ 035f597d)

@@ -72,6 +72,8 @@ class ProgressBar(Callback):
         self._fetches = self.get_tensors_maybe_in_tower(self._names) or None
         if self._fetches:
+            for t in self._fetches:
+                assert t.shape.ndims == 0, "ProgressBar can only print scalars, not {}".format(t)
             self._fetches = tf.train.SessionRunArgs(self._fetches)
             self._tqdm_args['bar_format'] = self._tqdm_args['bar_format'] + "{postfix} "
 ...
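The two added lines assert that every fetched tensor is a scalar (zero-dimensional) before the fetches are wrapped for the session. The check reduces to an ndims test, sketched here with plain integers standing in for `t.shape.ndims`; the helper name is invented:

```python
def check_all_scalars(ndims_by_name):
    # Mimics the added assertion: ProgressBar can only print scalars.
    for name, ndims in ndims_by_name.items():
        assert ndims == 0, "ProgressBar can only print scalars, not {}".format(name)
    return True

# Two zero-dimensional (scalar) fetches pass the check.
ok = check_all_scalars({"loss": 0, "accuracy": 0})
```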