seminar-breakout (Shashank Suhas) / Commits

Commit b5a238a7, authored Apr 21, 2016 by Yuxin Wu

    update latest numbers

Parent: 9380b5fd
Showing 3 changed files with 8 additions and 7 deletions (+8 -7)
examples/ResNet/README.md            +1 -0
examples/ResNet/cifar10-resnet.py    +5 -5
tensorpack/models/batch_norm.py      +2 -2
examples/ResNet/README.md

@@ -2,5 +2,6 @@
 ## ResNet
 Implements the paper "Deep Residual Learning for Image Recognition", [http://arxiv.org/abs/1512.03385](http://arxiv.org/abs/1512.03385)
+with the variants proposed in "Identity Mappings in Deep Residual Networks", [https://arxiv.org/abs/1603.05027](https://arxiv.org/abs/1603.05027).

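The "Identity Mappings" variant cited above reorders the residual unit so that normalization and activation come *before* each convolution, leaving the skip connection a pure identity; the original design instead applies ReLU after the addition. A toy sketch of the difference, with an arbitrary linear map standing in for the conv stack and BatchNorm omitted (the function `f` and all values are illustrative, not from the repository):

```python
import numpy as np

def residual_unit_preact(x, f):
    """Pre-activation unit (arXiv:1603.05027): the transform sees an
    activated input, and the skip path stays an untouched identity,
    so output = x + f(relu(x)) in this simplified sketch."""
    return x + f(np.maximum(x, 0.0))

def residual_unit_original(x, f):
    """Original unit (arXiv:1512.03385): the activation follows the
    addition, so the skip path is no longer a clean identity."""
    return np.maximum(x + f(x), 0.0)

# Toy "transform": a fixed linear map standing in for the conv layers.
f = lambda h: 0.1 * h
x = np.array([-2.0, 3.0])
print(residual_unit_preact(x, f))    # [-2.0, 3.3] -- the -2.0 survives
print(residual_unit_original(x, f))  # [ 0.0, 3.3] -- negatives clipped
```

The preserved negative value in the first output is the point of the variant: gradients can flow through the identity path without being gated by a final ReLU.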
examples/ResNet/cifar10-resnet.py

@@ -24,10 +24,10 @@ This implementation uses the variants proposed in:
 Identity Mappings in Deep Residual Networks, arxiv:1603.05027

 I can reproduce the results for
-n=5, about 7.2% val error after 93k step with 2 TitanX (6.8 it/s)
-n=18, about 6.05% val error after 62k step with 2 TitanX (about 10hr)
-n=30: a 182-layer network, about 5.5% val error after 51k step with 2 GPUs
-This model uses the whole training set instead of a 95:5 train-val split.
+n=5, about 7.1% val error after 67k step with 2 TitanX (6.1 it/s)
+n=18, about 6.0% val error after 62k step with 2 TitanX (about 10hr)
+n=30: a 182-layer network, about 5.6% val error after 51k step with 2 GPUs
+This model uses the whole training set instead of a train-val split.
 """

 BATCH_SIZE = 128
@@ -168,7 +168,7 @@ def get_config():
             [(1, 0.1), (82, 0.01), (123, 0.001), (300, 0.0002)])
     ]),
     session_config=sess_config,
-    model=Model(n=5),
+    model=Model(n=18),
     step_per_epoch=step_per_epoch,
     max_epoch=500,
 )
...
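The schedule `[(1, 0.1), (82, 0.01), (123, 0.001), (300, 0.0002)]` in the hunk above is a list of (epoch, value) pairs: the learning rate holds each value until the next listed epoch is reached. A minimal stand-alone re-implementation of that piecewise-constant lookup (illustrative only, not tensorpack's actual `ScheduledHyperParamSetter` code):

```python
def scheduled_value(schedule, epoch):
    """Return the value from the latest (epoch, value) pair whose
    epoch has been reached; None before the first pair applies.
    Assumes the schedule is sorted by epoch, as in the diff above."""
    current = None
    for e, v in schedule:
        if epoch >= e:
            current = v
    return current

schedule = [(1, 0.1), (82, 0.01), (123, 0.001), (300, 0.0002)]
print(scheduled_value(schedule, 50))   # 0.1
print(scheduled_value(schedule, 150))  # 0.001
print(scheduled_value(schedule, 400))  # 0.0002
```

With `max_epoch=500`, the final decay to 0.0002 at epoch 300 therefore governs the last 200 epochs of training.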
tensorpack/models/batch_norm.py

@@ -12,10 +12,10 @@ __all__ = ['BatchNorm']
 # http://stackoverflow.com/questions/33949786/how-could-i-use-batch-normalization-in-tensorflow
 # TF batch_norm only works for 4D tensor right now: #804
-# decay: being too close to 1 leads to slow start-up, but ends up better
+# decay: being too close to 1 leads to slow start-up. torch use 0.9.
 # eps: torch: 1e-5. Lasagne: 1e-4

 @layer_register(log_shape=False)
-def BatchNorm(x, use_local_stat=True, decay=0.999, epsilon=1e-5):
+def BatchNorm(x, use_local_stat=True, decay=0.9, epsilon=1e-5):
     """
     Batch normalization layer as described in:
...
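The "slow start-up" mentioned in the changed comment comes from how BatchNorm tracks running statistics: they are an exponential moving average, updated as `ema = decay * ema + (1 - decay) * batch_stat`, so a decay near 1 means the EMA needs many steps to approach the true statistics. A small sketch quantifying this under the simplifying assumption of a constant signal (values are from this toy model, not from training):

```python
def ema_steps_to_reach(decay, target=0.9):
    """Count update steps until an EMA that starts at 0 and tracks a
    constant signal of 1.0 first reaches `target`. Since the EMA after
    n steps equals 1 - decay**n, a decay closer to 1 warms up slower."""
    ema, steps = 0.0, 0
    while ema < target:
        ema = decay * ema + (1.0 - decay) * 1.0
        steps += 1
    return steps

print(ema_steps_to_reach(0.9))    # 22 steps
print(ema_steps_to_reach(0.999))  # 2302 steps
```

This is the trade-off the commit resolves: dropping decay from 0.999 to 0.9 (the Torch default, per the updated comment) makes the running mean and variance usable roughly a hundred times sooner.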