Shashank Suhas / seminar-breakout / Commits

Commit a759dbb0, authored Feb 26, 2018 by Yuxin Wu
Parent: a9b89567

    update docs

Showing 4 changed files with 11 additions and 7 deletions:
- .github/ISSUE_TEMPLATE.md (+1, -1)
- docs/tutorial/inference.md (+7, -5)
- examples/GAN/Improved-WGAN.py (+2, -0)
- tensorpack/predict/config.py (+1, -1)
.github/ISSUE_TEMPLATE.md

@@ -22,7 +22,7 @@ Feature Requests:
 + You can implement a lot of features by extending tensorpack
   (See http://tensorpack.readthedocs.io/en/latest/tutorial/index.html#extend-tensorpack).
   It does not have to be added to tensorpack unless you have a good reason.
-+ We don't take example requests.
++ We don't take feature requests for examples.

 Usage Questions:
docs/tutorial/inference.md

@@ -6,7 +6,8 @@
 There are two ways to do inference during training.

-1. The easiest way is to write a callback, and use `self.trainer.get_predictor()`
+1. The easiest way is to write a callback, and use
+   [self.trainer.get_predictor()](../modules/modules/train.html#tensorpack.train.TowerTrainer.get_predictor)
    to get a callable under inference mode.
    See [Write a Callback](extend/callback.html).

 2. If your inference follows the paradigm of:

@@ -29,15 +30,16 @@ Please note that, the metagraph saved during training is the training graph.
 But sometimes you need a different one for inference.
 For example, you may need a different data layout for CPU inference,
 or you may need placeholders in the inference graph, or the training graph contains multi-GPU replication
-which you want to remove.
+which you want to remove. In fact, directly import a huge training metagraph is usually not a good idea for deployment.
 In this case, you can always construct a new graph by simply:
 ```python
 a, b = tf.placeholder(...), tf.placeholder(...)
 # call symbolic functions on a, b
 ```
-The only tool tensorpack has for after-training inference is `OfflinePredictor`,
+The only tool tensorpack has for after-training inference is
+[OfflinePredictor](../modules/predict.html#tensorpack.predict.OfflinePredictor),
 a simple function to build the graph and return a callable for you.
 It is mainly for quick demo purposes.
-It only runs inference on Python data, therefore may not be the most efficient way.
-Check out some examples for its usage.
+It only runs inference on numpy arrays, therefore may not be the most efficient way.
+Check out examples and docs for its usage.
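The diff above describes `OfflinePredictor` as a simple function that builds the graph once and returns a callable that runs on numpy arrays. As a minimal, dependency-free sketch of that build-once/call-many pattern (all names here are hypothetical illustrations, not tensorpack's actual API or implementation):

```python
class OfflinePredictorSketch:
    """Hypothetical sketch of the build-once/call-many pattern behind an
    offline predictor. No TensorFlow here: the "graph" is just a dict of
    plain Python functions keyed by output name."""

    def __init__(self, build_graph_fn, input_names, output_names):
        # Build the graph exactly once, at construction time.
        self.graph = build_graph_fn()
        self.input_names = input_names
        self.output_names = output_names

    def __call__(self, *inputs):
        # Feed inputs by name, fetch the requested outputs.
        feed = dict(zip(self.input_names, inputs))
        return [self.graph[name](feed) for name in self.output_names]


def build_graph():
    # Stand-in for constructing placeholders and symbolic functions.
    return {"sum": lambda feed: feed["a"] + feed["b"],
            "prod": lambda feed: feed["a"] * feed["b"]}


predictor = OfflinePredictorSketch(build_graph, ["a", "b"], ["sum", "prod"])
print(predictor(3, 4))  # [7, 12]
```

The design point mirrored here is that all graph construction cost is paid once in `__init__`, so each call is a cheap feed-and-fetch, which is why such a predictor is convenient for quick demos.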
examples/GAN/Improved-WGAN.py

@@ -4,6 +4,7 @@
 # Author: Yuxin Wu <ppwwyyxxc@gmail.com>

 from tensorpack import *
+from tensorpack.tfutils import get_tf_version_number
 from tensorpack.tfutils.summary import add_moving_summary
 from tensorpack.tfutils.scope_utils import auto_reuse_variable_scope
 import tensorflow as tf

@@ -83,6 +84,7 @@ class Model(DCGAN.Model):
 if __name__ == '__main__':
+    assert get_tf_version_number() >= 1.4
     args = DCGAN.get_args(default_batch=64, default_z_dim=128)
     M = Model(shape=args.final_size, batch=args.batch, z_dim=args.z_dim)
     if args.sample:
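The new guard `assert get_tf_version_number() >= 1.4` compares the TensorFlow version as a number. The real helper lives in `tensorpack.tfutils`; a hypothetical standalone sketch of the string-to-float idea it relies on:

```python
def get_tf_version_number(version_string):
    # Hypothetical sketch: reduce "major.minor.patch" to the float
    # major.minor so it can be compared with e.g. >= 1.4.
    return float(".".join(version_string.split(".")[:2]))


print(get_tf_version_number("1.4.0"))   # 1.4
print(get_tf_version_number("1.3.0"))   # 1.3
```

One caveat of comparing versions as floats: `1.13` compares *less than* `1.4`, so this scheme misorders double-digit minor versions; comparing tuples of integers would be more robust.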
tensorpack/predict/config.py

@@ -40,7 +40,7 @@ class PredictConfig(object):
             tensors can be any computable tensor in the graph.
         return_input (bool): same as in :attr:`PredictorBase.return_input`.
         create_graph (bool): create a new graph, or use the default graph
-            when then predictor is first initialized.
+            when predictor is first initialized.

         You need to set either `model`, or `inputs_desc` plus `tower_func`.
     """
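The docstring's last line states an argument contract: set either `model`, or `inputs_desc` plus `tower_func`. A hypothetical standalone check expressing that rule (an illustration of the documented requirement, not tensorpack's actual validation code):

```python
def has_required_args(model=None, inputs_desc=None, tower_func=None):
    # Hypothetical sketch of the documented contract:
    # either `model` is given, or both `inputs_desc` and `tower_func` are.
    if model is not None:
        return True
    return inputs_desc is not None and tower_func is not None


print(has_required_args(model="my_model"))                      # True
print(has_required_args(inputs_desc=[], tower_func=lambda: 0))  # True
print(has_required_args(inputs_desc=[]))                        # False
```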