Commit 101d7aa5 authored by Yuxin Wu

update docs; fix predictor

parent b673b24c
 An issue has to be one of the following:
-1. Unexpected Problems / Potential Bugs
-2. Feature Requests
-3. Usage Questions
+- [ ] Unexpected Problems / Potential Bugs
+- [ ] Feature Requests
+- [ ] Questions on Using/Understanding Tensorpack

 For any unexpected problems, __PLEASE ALWAYS INCLUDE__:
 1. What you did:
@@ -20,19 +20,24 @@ For any unexpected problems, __PLEASE ALWAYS INCLUDE__:
 + Tensorpack version: `python -c 'import tensorpack; print(tensorpack.__version__)'`.
   You can install Tensorpack master by `pip install -U git+https://github.com/ppwwyyxx/tensorpack.git`.:
 + Hardware information, if relevant.
-5. About efficiency, PLEASE first read http://tensorpack.readthedocs.io/en/latest/tutorial/performance-tuning.html
+For efficiency issues, PLEASE first read http://tensorpack.readthedocs.io/en/latest/tutorial/performance-tuning.html

 Feature Requests:
-+ You can implement a lot of features by extending tensorpack
++ You can implement a lot of features by extending Tensorpack
   (See http://tensorpack.readthedocs.io/en/latest/tutorial/index.html#extend-tensorpack).
-  It does not have to be added to tensorpack unless you have a good reason.
-+ We don't take feature requests for examples or implementing papers.
+  It does not have to be added to Tensorpack unless you have a good reason.
++ "Could you improve/implement an example/paper ?"
+  -- the answer is: we don't know, and we don't take feature requests for
+  examples. You should do it yourself with Tensorpack. If you don't know how to
+  do it, you may ask a usage question.

 Usage Questions:
 + Read the [tutorials](http://tensorpack.readthedocs.io/en/latest/tutorial/index.html#user-tutorials) first.
-+ We answer "HOW to do X in tensorpack" for a well-defined X.
++ We answer "HOW to do X with Tensorpack" for a well-defined X.
+  We also answer "HOW/WHY Tensorpack does X" for some X that Tensorpack or its examples are doing.
   We don't answer general machine learning questions,
-  such as "what networks to use" or "I don't understand the paper".
+  such as "why my training doesn't converge", "what networks to use" or "I don't understand the paper".

 You can also use gitter (https://gitter.im/tensorpack/users) for more casual discussions.

@@ -26,10 +26,10 @@ matrix:
       env: TF_VERSION=1.3.0 TF_TYPE=release
     - os: linux
       python: 2.7
-      env: TF_VERSION=1.8.0 TF_TYPE=release
+      env: TF_VERSION=1.9.0 TF_TYPE=release
    - os: linux
       python: 3.5
-      env: TF_VERSION=1.8.0 TF_TYPE=release PYPI=true
+      env: TF_VERSION=1.9.0 TF_TYPE=release PYPI=true
    - os: linux
       python: 2.7
       env: TF_TYPE=nightly
...
@@ -62,11 +62,10 @@ demonstrating its flexibility for actual research.
 Dependencies:
-+ Python 2.7 or 3.3+
++ Python 2.7 or 3.3+. Python 2.7 is supported until [it retires in 2020](https://pythonclock.org/).
 + Python bindings for OpenCV (Optional, but required by a lot of features)
-+ TensorFlow >= 1.3.0 (Optional if you only want to use `tensorpack.dataflow` alone as a data processing library)
++ TensorFlow >= 1.3. (If you only want to use `tensorpack.dataflow` alone as a data processing library, TensorFlow is not needed)
 ```
-# install git, then:
 pip install --upgrade git+https://github.com/tensorpack/tensorpack.git
 # or add `--user` to avoid system-wide installation.
 ```
...
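The dependency note above says TensorFlow is not needed when `tensorpack.dataflow` is used alone. A minimal sketch of that standalone usage, not part of this commit (`DataFromList` and `BatchData` are existing dataflow utilities; the toy data is made up, and the iteration API differs slightly across tensorpack versions):

```
# Standalone dataflow usage -- no TensorFlow import required.
from tensorpack.dataflow import DataFromList, BatchData

# Ten toy datapoints, each with a single component.
df = DataFromList([[i] for i in range(10)], shuffle=False)
df = BatchData(df, batch_size=4, remainder=True)

df.reset_state()              # must be called once before iterating
for dp in df.get_data():      # newer versions also support `for dp in df:`
    print(dp)                 # each dp is a list of batched numpy arrays
```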
@@ -72,7 +72,8 @@ def backbone_argscope():
 def maybe_syncbn_scope():
     if cfg.BACKBONE.NORM == 'SyncBN':
         assert cfg.BACKBONE.FREEZE_AT == 2  # TODO add better support
-        with argscope(BatchNorm, training=None, sync_statistics='nccl'):
+        with argscope(BatchNorm, training=None,
+                      sync_statistics='nccl' if cfg.TRAINER == 'replicated' else 'horovod'):
             yield
     else:
         yield
...
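The hunk above picks the cross-GPU BatchNorm backend from the trainer type. As a rough, self-contained restatement (the plain `trainer` argument stands in for `cfg.TRAINER` and is hypothetical, as is the surrounding model code in the usage comment), the same idea outside the example's config machinery looks like this:

```
from contextlib import contextmanager

from tensorpack import argscope
from tensorpack.models import BatchNorm

@contextmanager
def maybe_syncbn_scope(trainer='replicated'):
    # Mirror the diff: NCCL all-reduce for the replicated trainer,
    # Horovod all-reduce otherwise.
    backend = 'nccl' if trainer == 'replicated' else 'horovod'
    # Every BatchNorm call made inside this scope inherits training=None
    # and sync_statistics=backend without repeating the arguments.
    with argscope(BatchNorm, training=None, sync_statistics=backend):
        yield

# Usage sketch: wrap the part of the model that should use synchronized BN,
#   with maybe_syncbn_scope(trainer='horovod'):
#       ... build backbone layers here ...
```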
@@ -28,7 +28,7 @@ class RunOp(Callback):
     """
     Args:
         op (tf.Operation or function): an Op, or a function that returns the Op in the graph.
-            The function will be called later (in the `setup_graph` callback).
+            The function will be called after the main graph has been created (in the `setup_graph` callback).
         run_before (bool): run the Op before training
         run_as_trigger (bool): run the Op on every :meth:`trigger()` call.
         run_step (bool): run the Op every step (along with training)
...
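Because the docstring change concerns when a function-valued `op` gets evaluated, a short usage sketch may help; the `UPDATE_OPS`-based op below is only an illustration, not taken from this commit:

```
import tensorflow as tf
from tensorpack.callbacks import RunOp

def make_op():
    # Called inside setup_graph(), i.e. after the main graph has been
    # created, so it can safely reference ops built by the model.
    return tf.group(*tf.get_collection(tf.GraphKeys.UPDATE_OPS))

# Run the op once before training starts and again on every trigger
# (by default once per epoch), but not on every training step.
cb = RunOp(make_op, run_before=True, run_as_trigger=True, run_step=False)
```

Passing a function instead of a ready-made Op is what defers graph construction to `setup_graph()`, as the updated docstring describes.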
@@ -118,12 +118,12 @@ class OnlinePredictor(PredictorBase):
         else:
             self._callable = None

-    def _do_call_old(self, dp):
-        feed = dict(zip(self.input_tensors, dp))
-        output = self.sess.run(self.output_tensors, feed_dict=feed)
-        return output
-
-    def _do_call_new(self, dp):
+    def _do_call(self, dp):
+        assert len(dp) == len(self.input_tensors), \
+            "{} != {}".format(len(dp), len(self.input_tensors))
+        if self.sess is None:
+            self.sess = tf.get_default_session()
         if self._callable is None:
             self._callable = self.sess.make_callable(
                 fetches=self.output_tensors,
@@ -131,19 +131,7 @@ class OnlinePredictor(PredictorBase):
                 accept_options=self.ACCEPT_OPTIONS)
             # run_metadata = tf.RunMetadata()
             # options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
-        ret = self._callable(*dp)
-        return ret
-
-    def _do_call(self, dp):
-        assert len(dp) == len(self.input_tensors), \
-            "{} != {}".format(len(dp), len(self.input_tensors))
-        if self.sess is None:
-            self.sess = tf.get_default_session()
-        if self._use_callable:
-            return self._do_call_new(dp)
-        else:
-            return self._do_call_old(dp)
+        return self._callable(*dp)

 class OfflinePredictor(OnlinePredictor):
...
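For context, a standalone TF 1.x sketch of the `Session.make_callable` pattern that the simplified `_do_call` now uses exclusively (the tiny placeholder graph is illustrative only):

```
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None], name='x')
y = x * 2.0

with tf.Session() as sess:
    # Built once; each call is equivalent to
    # sess.run([y], feed_dict={x: ...}), but the feed structure is
    # validated and bound up front instead of on every call.
    fn = sess.make_callable(fetches=[y], feed_list=[x])
    print(fn([1.0, 2.0, 3.0]))
```

This is why the predictor can cache `self._callable` and then just call it with the datapoint components positionally.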