Shashank Suhas / seminar-breakout / Commits

Commit a2c36b3d, authored Apr 25, 2018 by Yuxin Wu

misc docs update; use virtual_batch_size only for TF>=1.5 (fix #737)

Parent: 03f18976

Showing 5 changed files with 30 additions and 14 deletions (+30 -14):

examples/DoReFa-Net/alexnet-dorefa.py   +3  -1
examples/basics/cifar-convnet.py        +1  -1
tensorpack/models/batch_norm.py         +22 -10
tests/dev/git-hooks/pre-commit          +3  -2
tox.ini                                 +1  -0

examples/DoReFa-Net/alexnet-dorefa.py
@@ -41,7 +41,9 @@ Accuracy:
     With (W,A,G)=(1,2,6), 47.6% error
     With (W,A,G)=(1,2,4), 58.4% error
-    Don't train with >4 GPUs because the batch size will be different.
+    Training with 2 or 8 GPUs is supported but the result may get slightly
+    different, due to limited per-GPU batch size.
+    You may want to adjust total batch size and learning rate accordingly.

 Speed:
     About 11 iteration/s on 4 P100s. (Each epoch is set to 10000 iterations)
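
A side note on the new advice: the docstring now asks users to adjust total
batch size and learning rate together when moving to 2 or 8 GPUs. A common
way to do that is the linear-scaling heuristic, sketched below with made-up
baseline numbers (4 GPUs, batch 64 per GPU, LR 1e-4 are illustrative
assumptions, not values taken from alexnet-dorefa.py):

    # Linear learning-rate scaling sketch. The baseline constants are
    # illustrative assumptions, NOT values from this commit.
    BASE_GPUS = 4
    BASE_BATCH_PER_GPU = 64
    BASE_LR = 1e-4

    def scaled_lr(num_gpus, batch_per_gpu=BASE_BATCH_PER_GPU):
        """Scale the learning rate in proportion to the total batch size."""
        total_batch = num_gpus * batch_per_gpu
        return BASE_LR * total_batch / (BASE_GPUS * BASE_BATCH_PER_GPU)

    print(scaled_lr(2))  # half the total batch   -> 5e-05
    print(scaled_lr(8))  # double the total batch -> 0.0002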

examples/basics/cifar-convnet.py
@@ -15,7 +15,7 @@ A small convnet model for Cifar10 or Cifar100 dataset.
 Cifar10 trained on 1 GPU:
     91% accuracy after 50k iterations.
-    70 itr/s on P100
+    79 itr/s on P100
 Not a good model for Cifar100, just for demonstration.
 """

tensorpack/models/batch_norm.py
@@ -89,8 +89,9 @@ def BatchNorm(inputs, axis=None, training=None, momentum=0.9, epsilon=1e-5,
     if training is None:
         training = ctx.is_training
     training = bool(training)
+    TF_version = get_tf_version_number()
     if not training and ctx.is_training:
-        assert get_tf_version_number() >= 1.4, \
+        assert TF_version >= 1.4, \
             "Fine tuning a BatchNorm model with fixed statistics is only " \
             "supported after https://github.com/tensorflow/tensorflow/pull/12580 "
         if ctx.is_main_training_tower:  # only warn in first tower

@@ -102,15 +103,26 @@ def BatchNorm(inputs, axis=None, training=None, momentum=0.9, epsilon=1e-5,
     with rename_get_variable(
             {'moving_mean': 'mean/EMA',
              'moving_variance': 'variance/EMA'}):
-        layer = tf.layers.BatchNormalization(
-            axis=axis,
-            momentum=momentum, epsilon=epsilon,
-            center=center, scale=scale,
-            beta_initializer=beta_initializer,
-            gamma_initializer=gamma_initializer,
-            virtual_batch_size=virtual_batch_size,
-            fused=True
-        )
+        if TF_version >= 1.5:
+            layer = tf.layers.BatchNormalization(
+                axis=axis,
+                momentum=momentum, epsilon=epsilon,
+                center=center, scale=scale,
+                beta_initializer=beta_initializer,
+                gamma_initializer=gamma_initializer,
+                virtual_batch_size=virtual_batch_size,
+                fused=True
+            )
+        else:
+            assert virtual_batch_size is None, "Feature not supported in this version of TF!"
+            layer = tf.layers.BatchNormalization(
+                axis=axis,
+                momentum=momentum, epsilon=epsilon,
+                center=center, scale=scale,
+                beta_initializer=beta_initializer,
+                gamma_initializer=gamma_initializer,
+                fused=True
+            )
         xn = layer.apply(inputs, training=training, scope=tf.get_variable_scope())

     # maintain EMA only on one GPU is OK, even in replicated mode.
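
Why the gate exists: tf.layers.BatchNormalization only accepts the
virtual_batch_size argument starting with TF 1.5, so passing it to an older
TF fails at construction time (the #737 breakage this commit fixes). The
duplicated constructor call could also be written once by building the
keyword arguments conditionally; below is a sketch of that alternative,
reusing the names from the BatchNorm signature above, not the commit's
actual code:

    # Equivalent, non-duplicated version gate (a sketch, not the commit's code).
    # `axis`, `momentum`, etc. come from the surrounding BatchNorm signature.
    kwargs = dict(
        axis=axis,
        momentum=momentum, epsilon=epsilon,
        center=center, scale=scale,
        beta_initializer=beta_initializer,
        gamma_initializer=gamma_initializer,
        fused=True,
    )
    if TF_version >= 1.5:
        kwargs['virtual_batch_size'] = virtual_batch_size
    else:
        # Older TF raises TypeError on the unknown keyword; it must stay unset.
        assert virtual_batch_size is None, "Feature not supported in this version of TF!"
    layer = tf.layers.BatchNormalization(**kwargs)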

tests/dev/git-hooks/pre-commit
@@ -6,8 +6,9 @@ GIT_ARG="--git-dir ../.git --work-tree .."
 # find out modified python files, so that we ignored unstaged files
 # exclude ../docs
-MOD=$(git $GIT_ARG status -s | grep -E '\.py$' \
-    | grep -E '^\b+M\b+|^A' | cut -c 4- | grep -v '../docs')
+MOD=$(git $GIT_ARG status -s \
+    | grep -E '\.py$' | grep -v '../docs' \
+    | grep -E '^ *M|^ *A' | cut -c 4-)
 if [[ -n $MOD ]]; then
     flake8 $MOD
 fi
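
The revised pipeline filters by extension and excluded path before cut
strips the status columns, and swaps the old '^\b+M\b+|^A' pattern for the
plainer '^ *M|^ *A'. A rough Python equivalent of what the new pipeline
selects (an illustration only, not part of the hook; it assumes the hook's
working directory, with the repository rooted at ".."):

    import re
    import subprocess

    # Rough Python equivalent of the revised shell pipeline (illustration only).
    status = subprocess.run(
        ["git", "--git-dir", "../.git", "--work-tree", "..", "status", "-s"],
        stdout=subprocess.PIPE, universal_newlines=True, check=True,
    ).stdout.splitlines()

    modified = [
        line[3:]                          # cut -c 4-: drop the status columns
        for line in status
        if line.endswith(".py")           # grep -E '\.py$'
        and "../docs" not in line         # grep -v '../docs'
        and re.match(r"^ *M|^ *A", line)  # grep -E '^ *M|^ *A'
    ]
    print(modified)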

tox.ini
@@ -4,6 +4,7 @@ ignore = E265,E741,E742,E743
 exclude = .git,
           __init__.py,
           setup.py,
+          tensorpack/train/eager.py,
           docs,
           examples,
           docs/conf.py