Shashank Suhas / seminar-breakout · Commits

Commit 30ead05b
Authored Jan 28, 2019 by Yuxin Wu
Parent: 21c49469

    optimize_for_inference can fail (#1064)
Showing 3 changed files with 25 additions and 13 deletions (+25 −13)
.github/ISSUE_TEMPLATE/unexpected-problems---bugs.md    +7 −4
docs/tutorial/inference.md                              +4 −0
tensorpack/tfutils/export.py                            +14 −9
.github/ISSUE_TEMPLATE/unexpected-problems---bugs.md

...
@@ -4,8 +4,11 @@ about: Report unexpected problems about Tensorpack or its examples.
 
 ---
 
-If you're asking about an unexpected problem you met, use this template.
-__PLEASE DO NOT DELETE THIS TEMPLATE, FILL IT__:
+If you're asking about an unexpected problem which you do not know the root cause,
+use this template. __PLEASE DO NOT DELETE THIS TEMPLATE, FILL IT__:
+
+If you already know the root cause to your problem,
+feel free to delete everything in this template.
 
 ### 1. What you did:
...
@@ -54,5 +57,5 @@ not our responsibility to figure out.
   using an IDE or jupyter notebook), please retry under a normal command line shell.
 + Hardware information, e.g. number of GPUs used.
 
-Feel free to add extra information related to your issue, but
-please try to provide the above information __accurately__ to save effort in the investigation.
+You may often want to provide extra information related to your issue, but
+at the minimum please try to provide the above information __accurately__ to save effort in the investigation.
docs/tutorial/inference.md

...
@@ -99,6 +99,10 @@ demonstrates the usage of such a frozen/pruned graph.
 Again, you may often want to use a different graph for inference and you can
 do so by the arguments of `PredictConfig`.
 
+Note that the exporter relies on TensorFlow's automatic graph transformation, which does not always work reliably.
+Automatic graph transformation is often suboptimal and can sometimes fail.
+It's safer to write the graph by yourself.
+
 ## Inference After Training: Do It Yourself
...
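The added note recommends building or consuming the inference graph yourself when the automatic transformation misbehaves. For context, here is a minimal sketch of loading a graph written by `export_compact` with plain TF 1.x APIs; the file path, tensor names, and input shape are hypothetical placeholders, not part of this commit:

```python
import numpy as np
import tensorflow as tf

# Read the serialized GraphDef that export_compact() wrote to disk.
with tf.gfile.GFile("/tmp/frozen.pb", "rb") as f:   # hypothetical path
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name="")  # name="" keeps the original node names

with tf.Session(graph=graph) as sess:
    dummy = np.zeros([1, 224, 224, 3], np.float32)   # hypothetical input shape
    # "input:0"/"output:0" are placeholder tensor names; use the names
    # configured in your own PredictConfig.
    result = sess.run("output:0", feed_dict={"input:0": dummy})
```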
tensorpack/tfutils/export.py

...
@@ -34,16 +34,20 @@ class ModelExporter(object):
         super(ModelExporter, self).__init__()
         self.config = config
 
-    def export_compact(self, filename, toco_compatible=False):
+    def export_compact(self, filename, optimize=True, toco_compatible=False):
         """Create a self-contained inference-only graph and write final graph (in pb format) to disk.
 
         Args:
             filename (str): path to the output graph
+            optimize (bool): whether to use TensorFlow's `optimize_for_inference`
+                to prune and optimize the graph. This does not work on all types of graphs.
             toco_compatible (bool): See TensorFlow's
                 `optimize_for_inference
                 <https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/optimize_for_inference.py>`_
                 for details. Only available after TF 1.8.
         """
+        if toco_compatible:
+            assert optimize, "toco_compatible is only effective when optimize=True!"
         self.graph = self.config._maybe_create_graph()
         with self.graph.as_default():
             input = PlaceholderInput()
...
@@ -70,8 +74,9 @@ class ModelExporter(object):
             variable_names_blacklist=None)
 
         # prune unused nodes from graph
-        toco_args = () if get_tf_version_tuple() < (1, 8) else (toco_compatible, )
-        pruned_graph_def = optimize_for_inference_lib.optimize_for_inference(
-            frozen_graph_def,
-            [n.name[:-2] for n in input_tensors],
-            [n.name[:-2] for n in output_tensors],
+        if optimize:
+            toco_args = () if get_tf_version_tuple() < (1, 8) else (toco_compatible, )
+            frozen_graph_def = optimize_for_inference_lib.optimize_for_inference(
+                frozen_graph_def,
+                [n.name[:-2] for n in input_tensors],
+                [n.name[:-2] for n in output_tensors],
...
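A side note on the `[n.name[:-2] for n in ...]` expressions kept in this hunk: `optimize_for_inference` expects graph-node names, while tensorpack holds tensors, whose names carry an output index such as `:0`. The slice strips that suffix (assuming a single-digit index). A tiny illustration with a hypothetical name:

```python
tensor_name = "InputImage:0"   # hypothetical tensor name: "<node_name>:<output_index>"
node_name = tensor_name[:-2]   # "InputImage" -- the node name optimize_for_inference expects
```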
@@ -79,7 +84,7 @@ class ModelExporter(object):
                 *toco_args)
 
         with gfile.FastGFile(filename, "wb") as f:
-            f.write(pruned_graph_def.SerializeToString())
+            f.write(frozen_graph_def.SerializeToString())
             logger.info("Output graph written to {}.".format(filename))
 
     def export_serving(self, filename,
...
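Taken together, the new `optimize` flag gives callers an escape hatch when `optimize_for_inference` fails on their graph, which is the problem behind #1064. A hedged usage sketch: `ModelExporter`, `PredictConfig`, and `get_model_loader` are tensorpack's APIs as of this commit, while `MyModel`, the checkpoint path, and the tensor names are hypothetical:

```python
from tensorpack.predict import PredictConfig
from tensorpack.tfutils.export import ModelExporter
from tensorpack.tfutils.sessinit import get_model_loader

pred_config = PredictConfig(
    model=MyModel(),                                   # hypothetical ModelDesc subclass
    session_init=get_model_loader("train_log/model"),  # hypothetical checkpoint
    input_names=["input"],                             # hypothetical tensor names
    output_names=["output"],
)

exporter = ModelExporter(pred_config)
# Default: freeze variables, then prune/optimize via optimize_for_inference.
exporter.export_compact("/tmp/frozen_optimized.pb")
# If optimize_for_inference fails on this graph, skip it and
# write only the frozen graph (the escape hatch added by this commit).
exporter.export_compact("/tmp/frozen_only.pb", optimize=False)
```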