Commit 41759741, authored May 14, 2018 by Yuxin Wu

    update docs

Parent: 61bb05b5

Showing 5 changed files with 36 additions and 32 deletions (+36, -32)
docs/tutorial/callback.md             +4   -4
docs/tutorial/performance-tuning.md   +6   -3
docs/tutorial/save-load.md            +5   -5
examples/FasterRCNN/config.py         +13  -18
examples/ImageNetModels/README.md     +8   -2
docs/tutorial/callback.md (view file @ 41759741)

@@ -47,8 +47,8 @@ callbacks=[
     GPUUtilizationTracker(),
     # touch a file to pause the training and start a debug shell, to observe what's going on
     InjectShell(shell='ipython'),
     # estimate time until completion
     EstimatedTimeLeft()
 ] + [   # these callbacks are enabled by default already, though you can customize them
     # maintain those moving average summaries defined in the model (e.g. training loss, training error)
     MovingAverageSummary(),
@@ -73,9 +73,9 @@ Notice that callbacks cover every detail of training, ranging from graph operati
 This means you can customize every part of the training to your preference, e.g. display something
 different in the progress bar, evaluate part of the summaries at a different frequency, etc.
-These features may not be always useful, but think about how messy the main loop would look like if you
+These features are not always necessary, but think about how messy the main loop would look like if you
 were to write these logic together with the loops, and how easy your life will be if you could enable
-these features with one line when you need them.
+these features with just one line when you need them.
 See [Write a callback](extend/callback.html)
 for details on how callbacks work, what they can do, and how to write them.
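For context on the callback API this tutorial documents, here is a minimal sketch (not part of this commit) of how such a callback list is typically passed to a tensorpack TrainConfig; `MyModel` and `my_dataflow` are hypothetical placeholders, not names from the repo:

    # Sketch: assembling the callbacks mentioned in the tutorial above.
    from tensorpack import TrainConfig
    from tensorpack.callbacks import (
        ModelSaver, GPUUtilizationTracker, InjectShell, EstimatedTimeLeft)

    config = TrainConfig(
        model=MyModel(),            # placeholder ModelDesc subclass
        dataflow=my_dataflow,       # placeholder DataFlow yielding training data
        callbacks=[
            ModelSaver(),                   # save a checkpoint every epoch
            GPUUtilizationTracker(),        # log average GPU utilization per epoch
            InjectShell(shell='ipython'),   # touch a file to pause training in a debug shell
            EstimatedTimeLeft(),            # estimate time until completion
        ],
        steps_per_epoch=500,
        max_epoch=100,
    )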
docs/tutorial/performance-tuning.md (view file @ 41759741)

@@ -2,8 +2,11 @@
 # Performance Tuning
 
 __We do not know why your training is slow__ (and most of the times it's not a tensorpack problem).
-Performance is different across machines and tasks,
-so you need to figure out most parts by your own.
+Tensorpack is designed to be high-performance, as can be seen in the
+[benchmarks](https://github.com/tensorpack/benchmarks).
+But performance is different across machines and tasks,
+so you need to figure out what goes wrong by your own.
+Tensorpack has some tools to make it easier to understand the performance.
 
 Here's a list of things you can do when your training is slow.
 If you ask for help to understand and improve the speed, PLEASE do them and include your findings.
@@ -77,6 +80,6 @@ If you're unable to scale to multiple GPUs almost linearly:
    If not, it's a bug or an environment setup problem.
 2. Then note that your model may have a different communication-computation pattern that affects efficiency.
    There isn't a simple answer to this.
-   You may try a different multi-GPU trainer; the speed can vary a lot sometimes.
+   You may try a different multi-GPU trainer; the speed can vary a lot in rare cases.
 
 Note that scalibility is always measured by keeping "batch size per GPU" constant.
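To make the "try a different multi-GPU trainer" advice concrete, a small sketch (an illustration, not part of this commit) of how the trainer choice is a one-line change at launch time; `config` stands for a TrainConfig built elsewhere:

    # Sketch: the same TrainConfig can be launched with different multi-GPU trainers,
    # whose communication patterns differ; relative speed depends on the model.
    from tensorpack import launch_train_with_config
    from tensorpack.train import (
        SyncMultiGPUTrainerReplicated, SyncMultiGPUTrainerParameterServer)

    launch_train_with_config(config, SyncMultiGPUTrainerReplicated(8))
    # or, if the replicated trainer scales poorly for this particular model:
    # launch_train_with_config(config, SyncMultiGPUTrainerParameterServer(8))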
docs/tutorial/save-load.md (view file @ 41759741)

@@ -3,9 +3,9 @@
 ## Work with TF Checkpoint
 
-The `ModelSaver` callback saves the model to `logger.get_logger_dir()`,
+The `ModelSaver` callback saves the model to the directory defined by `logger.get_logger_dir()`,
 in TensorFlow checkpoint format.
-One checkpoint typically includes a `.data-xxxxx` file and a `.index` file.
+A TF checkpoint typically includes a `.data-xxxxx` file and a `.index` file.
 Both are necessary.
 
 `tf.train.NewCheckpointReader` is the best tool to parse TensorFlow checkpoint.
@@ -24,9 +24,9 @@ Model loading (in either training or testing) is through the `session_init` inte
 Currently there are two ways a session can be restored:
 [session_init=SaverRestore(...)](../modules/tfutils.html#tensorpack.tfutils.sessinit.SaverRestore)
 which restores a TF checkpoint,
-or [session_init=DictRestore(...)](../modules/tfutils.html#tensorpack.tfutils.sessinit.DictRestore) which restores a dict
-([get_model_loader](../modules/tfutils.html#tensorpack.tfutils.sessinit.get_model_loader)
-is a small helper to decide which one to use from a file name).
+or [session_init=DictRestore(...)](../modules/tfutils.html#tensorpack.tfutils.sessinit.DictRestore) which restores a dict.
+[get_model_loader](../modules/tfutils.html#tensorpack.tfutils.sessinit.get_model_loader)
+is a small helper to decide which one to use from a file name.
 
 To load multiple models, use [ChainInit](../modules/tfutils.html#tensorpack.tfutils.sessinit.ChainInit).
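To make the checkpoint-handling options above concrete, a small sketch (not part of this commit); the checkpoint path is hypothetical:

    # Sketch: parsing a TF checkpoint and choosing a session_init for tensorpack.
    import tensorflow as tf
    from tensorpack.tfutils.sessinit import SaverRestore, DictRestore, get_model_loader

    ckpt = 'train_log/run/model-10000'   # hypothetical checkpoint prefix (.index + .data-xxxxx)

    # Inspect the checkpoint contents directly:
    reader = tf.train.NewCheckpointReader(ckpt)
    print(reader.get_variable_to_shape_map())           # {variable name: shape}

    # Restore it through the session_init interface:
    session_init = SaverRestore(ckpt)                    # from a TF checkpoint
    # session_init = DictRestore(dict_of_numpy_arrays)   # or from a dict of arrays
    session_init = get_model_loader(ckpt)                # picks the right one from the file name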
examples/FasterRCNN/config.py (view file @ 41759741)

@@ -15,51 +15,46 @@ CLASS_NAMES = [] # NUM_CLASS strings. Will be populated later by coco loader
 # basemodel ----------------------
 RESNET_NUM_BLOCK = [3, 4, 6, 3]     # for resnet50
 # RESNET_NUM_BLOCK = [3, 4, 23, 3]  # for resnet101
 FREEZE_AFFINE = False   # do not train affine parameters inside BN
 
 # schedule -----------------------
 BASE_LR = 1e-2
 WARMUP = 1000   # in steps
 STEPS_PER_EPOCH = 500
-LR_SCHEDULE = [150000, 230000, 280000]
-# LR_SCHEDULE = [120000, 160000, 180000]    # "1x" schedule in detectron
-# LR_SCHEDULE = [240000, 320000, 360000]    # "2x" schedule in detectron
+# LR_SCHEDULE = [120000, 160000, 180000]    # "1x" schedule in detectron
+# LR_SCHEDULE = [150000, 230000, 280000]    # roughly a "1.5x" schedule
+LR_SCHEDULE = [240000, 320000, 360000]    # "2x" schedule in detectron
 
 # image resolution --------------------
 SHORT_EDGE_SIZE = 800
 MAX_SIZE = 1333
-# alternative (worse & faster) setting: 600, 1024
+# Alternative (worse & faster) setting: 600, 1024
 
 # anchors -------------------------
 ANCHOR_STRIDE = 16
-ANCHOR_STRIDES_FPN = (4, 8, 16, 32, 64)
-# sqrtarea of the anchor box
-ANCHOR_SIZES = (32, 64, 128, 256, 512)
+ANCHOR_STRIDES_FPN = (4, 8, 16, 32, 64)    # strides for each FPN level. Must be the same length as ANCHOR_SIZES
+ANCHOR_SIZES = (32, 64, 128, 256, 512)   # sqrtarea of the anchor box
 ANCHOR_RATIOS = (0.5, 1., 2.)
 NUM_ANCHOR = len(ANCHOR_SIZES) * len(ANCHOR_RATIOS)
 POSITIVE_ANCHOR_THRES = 0.7
 NEGATIVE_ANCHOR_THRES = 0.3
-# just to avoid too large numbers.
-BBOX_DECODE_CLIP = np.log(MAX_SIZE / 16.0)
+BBOX_DECODE_CLIP = np.log(MAX_SIZE / 16.0)  # to avoid too large numbers.
 
 # rpn training -------------------------
-# fg ratio among selected RPN anchors
-RPN_FG_RATIO = 0.5
-RPN_BATCH_PER_IM = 256
+RPN_FG_RATIO = 0.5  # fg ratio among selected RPN anchors
+RPN_BATCH_PER_IM = 256  # total (across FPN levels) number of anchors that are marked valid
 RPN_MIN_SIZE = 0
 RPN_PROPOSAL_NMS_THRESH = 0.7
 TRAIN_PRE_NMS_TOPK = 12000
 TRAIN_POST_NMS_TOPK = 2000
-# boxes overlapping crowd will be ignored.
-CROWD_OVERLAP_THRES = 0.7
+CROWD_OVERLAP_THRES = 0.7  # boxes overlapping crowd will be ignored.
 
 # fastrcnn training ---------------------
-FASTRCNN_BATCH_PER_IM = 256
+FASTRCNN_BATCH_PER_IM = 512
 FASTRCNN_BBOX_REG_WEIGHTS = np.array([10, 10, 5, 5], dtype='float32')
 FASTRCNN_FG_THRESH = 0.5
-# fg ratio in a ROI batch
-FASTRCNN_FG_RATIO = 0.25
+FASTRCNN_FG_RATIO = 0.25  # fg ratio in a ROI batch
 
 # testing -----------------------
 TEST_PRE_NMS_TOPK = 6000
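As a side note on the anchor settings in this config, a small sketch (illustrative only, not the repo's anchor-generation code) of why NUM_ANCHOR = len(ANCHOR_SIZES) * len(ANCHOR_RATIOS), using the standard Faster R-CNN convention that each (size, ratio) pair defines one anchor shape:

    # Sketch: enumerate one anchor (width, height) per (size, ratio) combination.
    import numpy as np

    ANCHOR_SIZES = (32, 64, 128, 256, 512)   # sqrt of the anchor area, in pixels
    ANCHOR_RATIOS = (0.5, 1., 2.)            # height / width

    anchors = []
    for size in ANCHOR_SIZES:
        for ratio in ANCHOR_RATIOS:
            w = size / np.sqrt(ratio)        # keep area ~ size**2 while varying aspect
            h = size * np.sqrt(ratio)
            anchors.append((w, h))

    print(len(anchors))   # 15 == len(ANCHOR_SIZES) * len(ANCHOR_RATIOS)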
examples/ImageNetModels/README.md (view file @ 41759741)

@@ -28,8 +28,14 @@ Evaluate the [pretrained model](http://models.tensorpack.com/ShuffleNet/):
 This Inception-BN script reaches 27% single-crop error after 300k steps with 6 GPUs.
-This VGG16 script, when trained with 32x8 batch size, reaches 29~30% single-crop error after 100 epochs (30h with 8 P100s),
-28% with BN, and 27.6% with GN.
+This VGG16 script, when trained with 32x8 batch size, reaches the following
+error rate after 100 epochs (30h with 8 P100s). This reproduces the VGG
+experiements in the paper [Group Normalization](https://arxiv.org/abs/1803.08494).
+
+| No Normalization                 | Batch Normalization | Group Normalization |
+|:---------------------------------|---------------------|--------------------:|
+| 29~30% (varies with random seed) | 28%                 | 27.6%               |
 
 ### ResNet, DoReFa-Net