Shashank Suhas / seminar-breakout / Commits

Commit 9e598322 authored Aug 05, 2016 by Yuxin Wu

    chkpt manip

parent a62ce63a

Showing 3 changed files with 22 additions and 2 deletions (+22 -2):

  README.md (+2, -1)
  examples/DoReFa-Net/alexnet-dorefa.py (+3, -1)
  scripts/checkpoint-manipulate.py (+17, -0)
README.md  View file @ 9e598322

...
@@ -24,10 +24,11 @@ You need to abstract your training task into three components:
 + Use Python to easily handle your own data format, yet still keep a good training speed thanks to multiprocess prefetch & TF Queue prefetch.
   For example, InceptionV3 can run at the same speed as the official code which reads data using TF operators.
-3. The callbacks, including everything you want to do apart from the training iterations, such as:
+3. Callbacks, including everything you want to do apart from the training iterations, such as:
    + Change hyperparameters during training
    + Print some variables of interest
    + Run inference on a test dataset
+   + Run some operations once in a while
 
 With the above components defined, the tensorpack trainer will run the training iterations for you.
 Multi-GPU training is ready to use by simply switching the trainer.
...
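The callback list in the README excerpt boils down to one pattern: hooks that the trainer invokes at fixed points in the training loop. The following is a minimal standalone sketch of that idea — the class and method names (`Callback`, `trigger_epoch`, `HyperParamSetter`, the toy `Trainer`) are invented here for illustration and are not tensorpack's actual API:

```python
# Hypothetical sketch of trainer callbacks; all names are invented for
# illustration and do not match tensorpack's real Callback interface.
class Callback:
    def trigger_epoch(self, trainer):
        pass  # called once after every epoch


class PrintVariable(Callback):
    """Print a value of interest after each epoch."""
    def __init__(self, name, get_value):
        self.name, self.get_value = name, get_value

    def trigger_epoch(self, trainer):
        print("{}: {}".format(self.name, self.get_value(trainer)))


class HyperParamSetter(Callback):
    """Change a hyperparameter during training, following a schedule."""
    def __init__(self, param, schedule):
        # schedule maps epoch number -> new value
        self.param, self.schedule = param, schedule

    def trigger_epoch(self, trainer):
        if trainer.epoch in self.schedule:
            trainer.hyperparams[self.param] = self.schedule[trainer.epoch]


class Trainer:
    """Toy trainer: runs the epoch loop and fires each callback in turn."""
    def __init__(self, callbacks):
        self.callbacks = callbacks
        self.epoch = 0
        self.hyperparams = {"lr": 0.1}

    def train(self, max_epoch):
        for self.epoch in range(1, max_epoch + 1):
            # ... the training iterations for one epoch would run here ...
            for cb in self.callbacks:
                cb.trigger_epoch(self)


trainer = Trainer([HyperParamSetter("lr", {2: 0.01}),
                   PrintVariable("lr", lambda t: t.hyperparams["lr"])])
trainer.train(3)
```

The point of the design is that hyperparameter changes, logging, and inference all share the same hook, so switching trainers (e.g. to a multi-GPU one) leaves the callbacks untouched.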
examples/DoReFa-Net/alexnet-dorefa.py  View file @ 9e598322

...
@@ -33,7 +33,9 @@ Accuracy:
     BATCH_SIZE * NUM_GPU. With a different number of GPUs in use, things might
     be a bit different, especially for learning rate.
-    With (W,A,G)=(32,32,32), 43.3% error.
+    With (W,A,G)=(32,32,32), 43% error.
+    With (W,A,G)=(1,2,6), 51% error.
+    With (W,A,G)=(1,2,4), 63% error.
 Speed:
     About 3.5 iterations/s on 4 Tesla M40. (Each epoch is set to 10000 iterations.)
...
scripts/checkpoint-manipulate.py  0 → 100755  View file @ 9e598322

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# File: checkpoint-manipulate.py
# Author: Yuxin Wu <ppwwyyxxc@gmail.com>

from tensorpack.tfutils.varmanip import dump_chkpt_vars
import tensorflow as tf
import sys

# Load every variable in the checkpoint into a plain dict, keyed by name.
model_path = sys.argv[1]
reader = tf.train.NewCheckpointReader(model_path)
var_names = reader.get_variable_to_shape_map().keys()
result = {}
for n in var_names:
    result[n] = reader.get_tensor(n)

# Drop into an interactive IPython shell with `result` in scope.
import IPython as IP
IP.embed(config=IP.terminal.ipapp.load_default_config())
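The core of the new script is a generic pattern: ask a checkpoint reader for all variable names, then pull each tensor into a plain dict for interactive inspection. Here is a dependency-free sketch of that loop; `FakeCheckpointReader` and its sample data are invented stand-ins for illustration (the real script uses `tf.train.NewCheckpointReader`, which exposes the same two methods):

```python
# Stand-in for tf.train.NewCheckpointReader, invented for illustration;
# it mimics only the two methods the script relies on.
class FakeCheckpointReader:
    def __init__(self, tensors):
        self._tensors = dict(tensors)

    def get_variable_to_shape_map(self):
        # Real readers map variable name -> shape; the script uses only the keys.
        return {name: [len(v)] for name, v in self._tensors.items()}

    def get_tensor(self, name):
        return self._tensors[name]


def load_all_vars(reader):
    """The script's loop: collect every checkpoint variable into a dict."""
    result = {}
    for n in reader.get_variable_to_shape_map().keys():
        result[n] = reader.get_tensor(n)
    return result


reader = FakeCheckpointReader({"conv1/W": [1.0, 2.0, 3.0], "fc/b": [0.5]})
result = load_all_vars(reader)
# result maps each variable name to its tensor, ready for inspection
# (the script then hands it to an IPython shell via IP.embed()).
```

Once everything is in an ordinary dict, checkpoint "manipulation" is just dict manipulation — rename keys, drop variables, or edit values before saving them back out.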