Project: seminar-breakout (Shashank Suhas)

Commit 2010c43d, authored May 05, 2018 by Yuxin Wu
Parent: 045f3352

Commit message: trigger rtfd
Showing 3 changed files with 12 additions and 11 deletions:

    docs/requirements.txt                     +1 -1
    docs/tutorial/trainer.md                  +7 -8
    tensorpack/graph_builder/model_desc.py    +4 -2
docs/requirements.txt

@@ -5,4 +5,4 @@ Sphinx>=1.6
 recommonmark==0.4.0
 sphinx_rtd_theme
 mock
-tensorflow==1.5.0
+tensorflow
docs/tutorial/trainer.md

@@ -31,18 +31,17 @@ The tower function needs to follow some conventions:
    * Only put variables __trainable by gradient descent__ into `TRAINABLE_VARIABLES`.
    * Put variables that need to be saved into `MODEL_VARIABLES`.
 3. It has to respect variable scopes:
-   * The name of any trainable variables created in the function must be like "variable_scope_name/variable/name".
+   * The name of any trainable variables created in the function must be like "variable_scope_name/custom/name".
     Don't depend on name_scope's name. Don't use variable_scope's name twice.
-   * The creation of any trainable variables must respect variable reuse.
-     To respect variable reuse, use `tf.get_variable` instead of `tf.Variable` in the function.
-     For non-trainable variables, it's OK to use `tf.Variable` to force creation of new variables in each tower.
-4. It will always be called under a `TowerContext`.
-   which will contain information about training/inference mode, reuse, etc.
+   * The creation of any trainable variables must respect the __reuse__ variable scope.
+     To respect variable reuse, use `tf.get_variable` instead of `tf.Variable` in the function.
+     On the other hand, for non-trainable variables, it's OK to use `tf.Variable` to force creation of new variables in each tower.
+4. It will always be called under a `TowerContext`, which can be accessed by `get_current_tower_context()`.
+   The context contains information about training/inference mode, reuse, etc.

 These conventions are easy to follow, and most layer wrappers (e.g.,
 tf.layers/slim/tensorlayer) do follow them. Note that certain Keras layers do not
-follow these conventions and may crash if used within tensorpack.
+follow these conventions and will need some workarounds if used within tensorpack.

 It's possible to write ones that are not, but all existing trainers in
 tensorpack are subclass of [TowerTrainer](../modules/train.html#tensorpack.train.TowerTrainer).
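To make the reuse rules in this hunk concrete, here is a minimal sketch (not part of the commit) of a tower function that follows conventions 3 and 4, assuming TensorFlow 1.x and tensorpack's `get_current_tower_context`; the variable names and shapes are illustrative only:

```python
import tensorflow as tf
from tensorpack.tfutils.tower import get_current_tower_context

def tower_func(image, label):
    # The trainer always invokes this under a TowerContext (convention 4).
    ctx = get_current_tower_context()
    # Trainable variables are created with tf.get_variable inside a
    # variable_scope, so multi-tower reuse works and the variable name
    # follows the "scope/name" convention (convention 3).
    with tf.variable_scope('linear'):
        W = tf.get_variable('W', [784, 10],
                            initializer=tf.glorot_uniform_initializer())
        b = tf.get_variable('b', [10], initializer=tf.zeros_initializer())
    logits = tf.nn.xw_plus_b(image, W, b, name='logits')
    # A non-trainable variable may use tf.Variable directly: each tower
    # then creates its own fresh copy, which the conventions allow.
    local_step = tf.Variable(0, trainable=False, name='local_step')
    cost = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=label,
                                                       logits=logits),
        name='cost')
    return cost if ctx.is_training else logits
```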
tensorpack/graph_builder/model_desc.py

@@ -175,7 +175,9 @@ class ModelDesc(ModelDescBase):
    1. :meth:`build_graph(...)` method should return a cost when called under a training context.
       The cost will be the final cost to be optimized by the optimizer.
       Therefore it should include necessary regularization.

    2. Subclass is expected to implement :meth:`optimizer()` method.
    """

    def get_cost(self):
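The docstring contract reads more clearly next to a concrete subclass. Below is a hedged sketch, not part of the diff: a minimal ModelDesc whose `build_graph` returns the final (regularized) cost and which implements `optimizer()`; the `inputs()` method name, placeholder shapes, and layer choices are assumptions about the tensorpack API of that era.

```python
import tensorflow as tf
from tensorpack import ModelDesc

class LinearModel(ModelDesc):
    def inputs(self):
        # Assumed input declaration; the shapes are illustrative only.
        return [tf.placeholder(tf.float32, [None, 784], 'image'),
                tf.placeholder(tf.int32, [None], 'label')]

    def build_graph(self, image, label):
        logits = tf.layers.dense(image, 10, name='fc')
        cost = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(labels=label,
                                                           logits=logits))
        # Point 1 of the docstring: under a training context the return
        # value is the final cost to be optimized, so regularization
        # losses are folded in here.
        return cost + tf.losses.get_regularization_loss()

    def optimizer(self):
        # Point 2 of the docstring: subclasses implement optimizer().
        return tf.train.AdamOptimizer(1e-3)
```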