# Commit 25583e52 — update example notes

Project: Shashank Suhas / seminar-breakout
Authored Dec 06, 2016 by Yuxin Wu
Parent: cc8452e5

Showing 4 changed files with 17 additions and 7 deletions.
- README.md (+1 −1)
- examples/README.md (+14 −5)
- examples/ResNet/cifar10-resnet.py (+1 −1)
- tensorpack/callbacks/param.py (+1 −0)
## README.md

```diff
@@ -9,7 +9,7 @@ You can train them and reproduce the performance... not just to see how to write
 + [InceptionV3 on ImageNet](examples/Inception/inceptionv3.py)
 + [Fully-convolutional Network for Holistically-Nested Edge Detection(HED)](examples/HED)
 + [Spatial Transformer Network on MNIST addition](examples/SpatialTransformer)
-+ [Generative Adversarial Network(GAN) variants](examples/GAN)
++ [Generative Adversarial Network(GAN) variants (DCGAN,Image2Image,InfoGAN)](examples/GAN)
 + [Deep Q-Network(DQN) variants on Atari games](examples/Atari2600)
 + [Asynchronous Advantage Actor-Critic(A3C) with demos on OpenAI Gym](examples/OpenAIGym)
 + [char-rnn language model](examples/char-rnn)
```
## examples/README.md

```diff
@@ -3,14 +3,23 @@
 Training examples with __reproducible__ and meaningful performance.

+## Vision:
 + [An illustrative mnist example with explanation of the framework](mnist-convnet.py)
 + [A tiny SVHN ConvNet with 97.8% accuracy](svhn-digit-convnet.py)
-+ [Inception-BN with 71% accuracy](Inception/inception-bn.py)
-+ [InceptionV3 with 74% accuracy (similar to the official code)](Inception/inceptionv3.py)
 + [DoReFa-Net: binary / low-bitwidth CNN on ImageNet](DoReFa-Net)
 + [ResNet for ImageNet/Cifar10/SVHN](ResNet)
-+ [Holistically-Nested Edge Detection](HED)
++ [Inception-BN with 71% accuracy](Inception/inception-bn.py)
++ [InceptionV3 with 74% accuracy (similar to the official code)](Inception/inceptionv3.py)
++ [Fully-convolutional Network for Holistically-Nested Edge Detection(HED)](HED)
 + [Spatial Transformer Networks on MNIST addition](SpatialTransformer)
-+ [Generative Adversarial Networks variants](GAN)
 + Load a pretrained [AlexNet](load-alexnet.py) or [VGG16](load-vgg16.py) model.
-+ Reinforcement learning (DQN, A3C) on [Atari games](Atari2600) and [demos on OpenAI Gym](OpenAIGym).
+
+## Reinforcement Learning:
++ [Deep Q-Network(DQN) variants on Atari games](Atari2600)
++ [Asynchronous Advantage Actor-Critic(A3C) with demos on OpenAI Gym](OpenAIGym)
+
+## Unsupervised:
++ [Generative Adversarial Network(GAN) variants (DCGAN,Image2Image,InfoGAN)](examples/GAN)
+
+## Speech / NLP:
 + [char-rnn for fun](char-rnn)
```
## examples/ResNet/cifar10-resnet.py

```diff
@@ -22,7 +22,7 @@ Identity Mappings in Deep Residual Networks, arxiv:1603.05027
 I can reproduce the results on 2 TitanX for
 n=5, about 7.1% val error after 67k steps (8.6 step/s)
-n=18, about 5.9% val error after 80k steps (2.6 step/s)
+n=18, about 5.95% val error after 80k steps (2.6 step/s)
 n=30: a 182-layer network, about 5.6% val error after 51k steps (1.55 step/s)
 This model uses the whole training set instead of a train-val split.
 """
```
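A quick sanity check on the depths quoted in the docstring above. For this family of CIFAR ResNets, total depth follows the standard 6n+2 rule (3 stages of n residual blocks with 2 conv layers each, plus the initial conv and the final fully-connected layer) — the rule is from the ResNet papers, not stated in the diff itself:

```python
def resnet_depth(n):
    # 3 stages * n blocks * 2 conv layers, plus first conv and final fc
    return 6 * n + 2

# n=30 gives the "182-layer network" mentioned in the docstring
print(resnet_depth(5), resnet_depth(18), resnet_depth(30))  # 32 110 182
```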
## tensorpack/callbacks/param.py

```diff
@@ -258,5 +258,6 @@ class StatMonitorParamSetter(HyperParamSetter):
         if hist_max > hist_first + self.threshold:  # large enough
             return None
         self.last_changed_epoch = self.epoch_num
+        logger.info("[StatMonitorParamSetter] Triggered, history: " + ','.join(hist))
         return self.value_func(self.get_current_value())
```
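The hunk only shows the tail of the setter's decision logic. Extracted as a standalone sketch, the plateau condition it guards looks like this (the function name and the explicit history argument are illustrative, not tensorpack's API):

```python
def plateau_triggered(hist, threshold):
    # Mirrors the guard in the hunk above: the setter fires (changes the
    # hyperparameter) only when the statistic's maximum over the history
    # window has NOT risen more than `threshold` above the window's first
    # value, i.e. when the monitored stat has plateaued.
    hist_first = hist[0]
    hist_max = max(hist)
    if hist_max > hist_first + threshold:  # improved by a large enough margin
        return False
    return True

print(plateau_triggered([0.70, 0.71, 0.70], 0.02))  # True: plateaued
print(plateau_triggered([0.70, 0.80, 0.85], 0.02))  # False: still improving
```

When the condition holds, the real callback also records `last_changed_epoch` and logs the history before computing the new value, as the added `logger.info` line shows.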