Shashank Suhas / seminar-breakout · Commits

Commit 36b05bb7 authored Sep 20, 2018 by Yuxin Wu
Improve warning message on mismatched variables (#901)
parent 2fc3be15

Showing 2 changed files with 4 additions and 3 deletions (+4 -3)
README.md                        +1 -0
tensorpack/tfutils/sessinit.py   +3 -3

README.md

@@ -13,6 +13,7 @@ It's Yet Another TF high-level API, with __speed__, __readability__ and __flexib
 1. Focus on __training speed__.
    + Speed comes for free with Tensorpack -- it uses TensorFlow in the __efficient way__ with no extra overhead.
      On common CNNs, it runs training [1.2~5x faster](https://github.com/tensorpack/benchmarks/tree/master/other-wrappers) than the equivalent Keras code.
+     Your training can probably gets faster if written with Tensorpack.
    + Data-parallel multi-GPU/distributed training strategy is off-the-shelf to use.
      It scales as well as Google's [official benchmark](https://www.tensorflow.org/performance/benchmarks).
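
The "off-the-shelf" data-parallel training mentioned in the README line above maps to tensorpack's multi-GPU trainers. A minimal sketch of that usage, not part of this commit: MyModel stands in for a user-defined ModelDesc subclass and df for a DataFlow, both hypothetical placeholders.

from tensorpack import TrainConfig, SyncMultiGPUTrainerReplicated, launch_train_with_config

# MyModel (a ModelDesc subclass) and df (a DataFlow) are hypothetical
# placeholders; they are not defined anywhere in this diff.
config = TrainConfig(
    model=MyModel(),   # defines inputs, the graph, and the optimizer
    dataflow=df,       # produces training batches
    max_epoch=100,
)

# Synchronous, replicated data-parallel training on 2 GPUs.
launch_train_with_config(config, SyncMultiGPUTrainerReplicated(2))
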
tensorpack/tfutils/sessinit.py

@@ -140,9 +140,9 @@ class SaverRestore(SessionInit):
                 func(reader, name, v)
                 chkpt_vars_used.add(name)
             else:
-                vname = v.op.name
-                if not is_training_name(vname):
-                    mismatch.add(vname)
+                # use tensor name (instead of op name) for logging, to be consistent with the reverse case
+                if not is_training_name(v.name):
+                    mismatch.add(v.name)
         mismatch.log()
         mismatch = MismatchLogger('checkpoint', 'graph')
         if len(chkpt_vars_used) < len(chkpt_vars):
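
The change above logs a graph variable by its tensor name (v.name) rather than its op name (v.op.name), as the added comment explains. For reference, a minimal TF 1.x sketch of the difference between the two names; the variable name 'W' is an arbitrary example unrelated to this commit.

import tensorflow as tf

# TF 1.x: a variable's op name carries no output index; its tensor name does.
# The name 'W' is an arbitrary example.
v = tf.get_variable('W', shape=[2, 3])

print(v.op.name)  # W
print(v.name)     # W:0

The class being touched, SaverRestore, is typically passed as session_init=SaverRestore('/path/to/checkpoint') when building a TrainConfig or PredictConfig; the warning improved here is what gets logged when the checkpoint and the graph disagree on variable names.
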