Shashank Suhas / seminar-breakout / Commits
Commit aa1f82f7 authored May 13, 2019 by Yuxin Wu
fix bug in f6ede612
parent eafe564b
Showing 1 changed file with 6 additions and 4 deletions
tensorpack/models/batch_norm.py  (+6, -4)
@@ -163,7 +163,12 @@ def BatchNorm(inputs, axis=None, training=None, momentum=0.9, epsilon=1e-5,
            don't want to update it.
     2. As long as `training=True`, `sync_statistics` and `ema_update` option will take effect.
     """
+    # parse training/ctx
+    ctx = get_current_tower_context()
+    if training is None:
+        training = ctx.is_training
+    training = bool(training)
 
     # parse shapes
     data_format = get_data_format(data_format, keras_mode=False)
     shape = inputs.get_shape().as_list()
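
The relocated block above resolves the effective `training` flag before anything later in the function consumes it. A minimal standalone sketch of that resolution (illustration only, not tensorpack code; `TowerContext` is a hypothetical stand-in for the object returned by `get_current_tower_context()`):

from dataclasses import dataclass

@dataclass
class TowerContext:
    is_training: bool  # whether the surrounding graph is being built for training

def resolve_training(ctx, training=None):
    # If the caller did not pass `training`, inherit it from the tower context.
    if training is None:
        training = ctx.is_training
    # Normalize to a plain bool so later boolean logic is unambiguous.
    return bool(training)

print(resolve_training(TowerContext(is_training=True)))         # -> True
print(resolve_training(TowerContext(is_training=True), False))  # -> False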
@@ -200,10 +205,6 @@ def BatchNorm(inputs, axis=None, training=None, momentum=0.9, epsilon=1e-5,
     TF_version = get_tf_version_tuple()
 
-    # parse training/ctx
-    if training is None:
-        training = ctx.is_training
-    training = bool(training)
 
     freeze_bn_backward = not training and ctx.is_training
     if freeze_bn_backward:
         assert TF_version >= (1, 4), \
@@ -212,6 +213,7 @@ def BatchNorm(inputs, axis=None, training=None, momentum=0.9, epsilon=1e-5,
             logger.warn("[BatchNorm] Using moving_mean/moving_variance in training.")
         # Using moving_mean/moving_variance in training, which means we
         # loaded a pre-trained BN and only fine-tuning the affine part.
 
     do_sync_bn = (sync_statistics is not None) and training
     if not do_sync_bn:
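
The remaining hunks consume the flag resolved earlier: `freeze_bn_backward` marks the case where BN keeps its moving statistics even though the surrounding graph is training (a pre-trained BN whose affine parameters are being fine-tuned), and `do_sync_bn` enables synchronized statistics only when BN itself is training. A standalone sketch of just these two decisions (illustration only, not tensorpack code; "nccl" below is merely a placeholder for any non-None `sync_statistics` value):

def bn_mode(training, ctx_is_training, sync_statistics=None):
    # BN uses frozen moving_mean/moving_variance while the graph trains:
    # the fine-tune-only-the-affine-part case warned about in the diff.
    freeze_bn_backward = not training and ctx_is_training
    # Synchronized statistics only matter when BN computes batch statistics.
    do_sync_bn = (sync_statistics is not None) and training
    return freeze_bn_backward, do_sync_bn

print(bn_mode(training=False, ctx_is_training=True))                         # -> (True, False)
print(bn_mode(training=True, ctx_is_training=True, sync_statistics="nccl"))  # -> (False, True)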