Shashank Suhas / seminar-breakout / Commits

Commit f6ede612
authored May 11, 2019 by Yuxin Wu
Better BatchNorm (with ema_update option decoupled from training)
parent 4a46b93d
Showing 2 changed files with 72 additions and 43 deletions (+72 -43):

    examples/GAN/GAN.py              +2   -2
    tensorpack/models/batch_norm.py  +70  -41
examples/GAN/GAN.py  (view file @ f6ede612)

@@ -169,8 +169,8 @@ class SeparateGANTrainer(TowerTrainer):
         # Build the graph
         self.tower_func = TowerFuncWrapper(model.build_graph, model.get_input_signature())
         with TowerContext('', is_training=True), \
-                argscope(BatchNorm, internal_update=True):
-            # should not hook the updates to both train_op, it will hurt training speed.
+                argscope(BatchNorm, ema_update='internal'):
+            # should not hook the EMA updates to both train_op, it will hurt training speed.
             self.tower_func(*input.get_input_tensors())
             update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
             if len(update_ops):
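The practical effect of the change above: with ema_update='internal', each BatchNorm layer applies its EMA update inside the layer's own forward pass, so the trainer no longer has to (and should not) hook tf.GraphKeys.UPDATE_OPS onto every train_op. A minimal sketch of that usage pattern, not taken from this commit; it assumes TF 1.x (which tensorpack targeted at the time) and a hypothetical one-layer build_graph just to exercise BatchNorm:

import tensorflow as tf

from tensorpack.models import BatchNorm
from tensorpack.tfutils.argscope import argscope
from tensorpack.tfutils.tower import TowerContext

def build_graph(image):
    # hypothetical single-layer "tower", only here to exercise BatchNorm
    return BatchNorm('bn', image)

image = tf.placeholder(tf.float32, [None, 32, 32, 3], name='image')
with TowerContext('', is_training=True), \
        argscope(BatchNorm, ema_update='internal'):
    out = build_graph(image)

# With ema_update='internal' the EMA updates run as control dependencies of the
# layer output, so nothing should be left in UPDATE_OPS for the caller to attach
# to train_op (this is what the `if len(update_ops)` guard above is checking for).
print('ops in UPDATE_OPS:', len(tf.get_collection(tf.GraphKeys.UPDATE_OPS)))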
tensorpack/models/batch_norm.py  (view file @ f6ede612)

This diff is collapsed and is not shown here.
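Since the batch_norm.py changes are collapsed, the following is only a guess at the contrast the new option draws. Based on the commit title and on how later tensorpack releases document BatchNorm, ema_update plausibly names the previous behaviors explicitly, including a mode that keeps registering EMA updates in tf.GraphKeys.UPDATE_OPS for the caller to run with the training op. A hedged sketch of that contrasting mode; the 'collection' value is an assumption, not something visible in this diff:

import tensorflow as tf

from tensorpack.models import BatchNorm
from tensorpack.tfutils.argscope import argscope
from tensorpack.tfutils.tower import TowerContext

image = tf.placeholder(tf.float32, [None, 32, 32, 3], name='image')
with TowerContext('', is_training=True), \
        argscope(BatchNorm, ema_update='collection'):  # assumed mode name
    out = BatchNorm('bn', image)

# In this mode the EMA updates land in tf.GraphKeys.UPDATE_OPS, and the caller
# must run them together with the training op. This is exactly the coupling the
# GAN trainer above avoids by choosing 'internal'.
loss = tf.reduce_mean(out)
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)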