Shashank Suhas / seminar-breakout / Commits

Commit fe049874 authored Feb 05, 2018 by Yuxin Wu
Prefer fused_batch_norm in bn inference.
parent a46de342

Showing 1 changed file with 5 additions and 5 deletions:

tensorpack/models/batch_norm.py (+5, −5)
```diff
@@ -151,11 +151,11 @@ def BatchNorm(x, use_local_stat=None, decay=0.9, epsilon=1e-5,
                 mean=moving_mean, variance=moving_var, epsilon=epsilon,
                 data_format=data_format, is_training=False)
         else:
-            # non-fused op is faster for inference  # TODO test if this is still true
-            if ndims == 4 and data_format == 'NCHW':
-                [g, b, mm, mv] = [reshape_for_bn(_, ndims, n_out, data_format)
-                                  for _ in [gamma, beta, moving_mean, moving_var]]
-                xn = tf.nn.batch_normalization(x, mm, mv, b, g, epsilon)
+            if ndims == 4:
+                xn, _, _ = tf.nn.fused_batch_norm(
+                    x, gamma, beta,
+                    mean=moving_mean, variance=moving_var,
+                    epsilon=epsilon, data_format=data_format, is_training=False)
             else:
                 # avoid the reshape if possible (when channel is the last dimension)
                 xn = tf.nn.batch_normalization(
```
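Both the removed `tf.nn.batch_normalization` path and the new `tf.nn.fused_batch_norm(..., is_training=False)` path compute the same inference-mode transform, y = gamma * (x - moving_mean) / sqrt(moving_var + epsilon) + beta, with per-channel statistics; the fused op just does it in a single kernel, which is why the NCHW-specific `reshape_for_bn` workaround is no longer needed. A minimal NumPy sketch of that computation (the helper name and test values below are illustrative, not from the repository):

```python
import numpy as np

def batch_norm_inference(x, gamma, beta, moving_mean, moving_var, epsilon=1e-5):
    # Inference-mode batch norm: normalize with the stored moving statistics
    # rather than batch statistics. For NHWC input, per-channel parameters of
    # shape (C,) broadcast against the last axis with no reshape required.
    inv_std = 1.0 / np.sqrt(moving_var + epsilon)
    return gamma * (x - moving_mean) * inv_std + beta

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 4, 4, 3)).astype(np.float32)  # NHWC, C = 3
gamma = np.full(3, 2.0, dtype=np.float32)
beta = np.full(3, 0.5, dtype=np.float32)
moving_mean = np.zeros(3, dtype=np.float32)
moving_var = np.ones(3, dtype=np.float32)

xn = batch_norm_inference(x, gamma, beta, moving_mean, moving_var)
# With mean 0 and variance 1 this reduces to an affine map: 2 * x + 0.5.
assert np.allclose(xn, 2.0 * x + 0.5, atol=1e-4)
```

For NCHW input the non-fused op needed the parameters reshaped to `(1, C, 1, 1)` before broadcasting (the job of the removed `reshape_for_bn` calls), whereas `fused_batch_norm` accepts `data_format` directly.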