Commit d62836b0 authored by Yuxin Wu

update docs

parent eb408ed0
@@ -24,7 +24,8 @@ Some typical questions that we DO NOT answer:
   Tensorpack maintainers make sure the examples perform well without modification.
   But it's your job to pick the model and parameters that are suitable for your own situation.
   We do not help with such questions unless they appear to be a bug in tensorpack.
-+ "Why my model doesn't work?", "I don't understand this paper you implement."
++ "Why my model doesn't work?", "I don't understand this paper you implement.",
+  "How should I change the examples for my own dataset?"
   We do not answer machine learning questions.
...
@@ -12,6 +12,18 @@ about: More general questions about Tensorpack.
 + We answer "HOW to do X with Tensorpack" for a well-defined X.
   We also answer "HOW/WHY Tensorpack does X" for some X that Tensorpack or its examples are doing.
-We __don't__ answer general machine learning questions, such as "why my training doesn't converge", "what networks to use" or "I don't understand the paper".
+Some typical questions that we DO NOT answer:
++ "Could you improve/implement an example/paper ?" --
+  We have no plans to do so. We don't consider feature
+  requests for examples or implement a paper for you.
+  If you don't know how to do something yourself, you may ask a usage question.
++ "The examples do not perform well after I change the models/dataset/parameters/etc."
+  Tensorpack maintainers make sure the examples perform well without modification.
+  But it's your job to pick the model and parameters that are suitable for your own situation.
+  We do not help with such questions unless they appear to be a bug in tensorpack.
++ "Why my model doesn't work?", "I don't understand this paper you implement.",
+  "How should I change the examples for my own dataset?"
+  We do not answer machine learning questions.

 You can also use gitter (https://gitter.im/tensorpack/users) for more casual discussions.
@@ -7,6 +7,9 @@ such as conv/deconv, fc, bn, pooling layers. **You do not need to learn them.**
 These layers were written only because there were no alternatives when
 tensorpack was first developed.
 Nowadays, these implementations actually call `tf.layers` directly.
+Tensorpack will not add any more layers into its core library because this is
+not the focus of tensorpack, and there are many other alternative symbolic
+libraries today.

 Today, you can just use `tf.layers` or any other symbolic libraries inside tensorpack.
 Using the tensorpack implementations, you can also benefit from `argscope` and `LinearWrap` to
...
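Editorial note, not part of this commit: the `argscope` mentioned above sets default arguments for tensorpack layers, and plain `tf.layers` calls can be mixed in freely (`LinearWrap` offers a similar chained style). A minimal sketch under the TF1-era `tf.layers` API; the layer names, sizes, and shapes below are made up for illustration:

```python
# Hypothetical sketch: mixing tensorpack layers, argscope, and raw tf.layers
# inside one model function. Names and sizes are illustrative only.
import tensorflow as tf
from tensorpack.models import Conv2D, MaxPooling, FullyConnected
from tensorpack.tfutils.argscope import argscope

def build_graph(image):
    # argscope applies default arguments to the listed tensorpack layers.
    with argscope(Conv2D, kernel_size=3, activation=tf.nn.relu):
        l = Conv2D('conv1', image, filters=32)
        l = MaxPooling('pool1', l, pool_size=2)
        # A raw tf.layers call can be mixed in at any point.
        l = tf.layers.conv2d(l, 64, 3, activation=tf.nn.relu, name='conv2')
        l = FullyConnected('fc0', l, 512, activation=tf.nn.relu)
    return FullyConnected('fc1', l, 10)
```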
...@@ -47,7 +47,7 @@ On a single machine: ...@@ -47,7 +47,7 @@ On a single machine:
./train.py --config \ ./train.py --config \
MODE_MASK=True MODE_FPN=True \ MODE_MASK=True MODE_FPN=True \
DATA.BASEDIR=/path/to/COCO/DIR \ DATA.BASEDIR=/path/to/COCO/DIR \
BACKBONE.WEIGHTS=/path/to/ImageNet-R50-Pad.npz \ BACKBONE.WEIGHTS=/path/to/ImageNet-R50-AlignPadding.npz \
``` ```
To run distributed training, set `TRAINER=horovod` and refer to [HorovodTrainer docs](http://tensorpack.readthedocs.io/modules/train.html#tensorpack.train.HorovodTrainer). To run distributed training, set `TRAINER=horovod` and refer to [HorovodTrainer docs](http://tensorpack.readthedocs.io/modules/train.html#tensorpack.train.HorovodTrainer).
@@ -77,6 +77,7 @@ prediction will need to be run with the corresponding training configs.
 ## Results

 These models are trained on trainval35k and evaluated on minival2014 using mAP@IoU=0.50:0.95.
+All models are fine-tuned from ImageNet pre-trained R50/R101 models in the [model zoo](http://models.tensorpack.com/FasterRCNN/).
 Performance in [Detectron](https://github.com/facebookresearch/Detectron/) can be roughly reproduced.
 Mask R-CNN results contain both box and mask mAP.
...
 [flake8]
 max-line-length = 120
-ignore = F403,F405,E402,E741,E742,E743
+ignore = F403,F405,E402,E741,E742,E743,W504,W605
 exclude = private,
           FasterRCNN/utils
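For context (not part of the commit): W504 is pycodestyle's "line break after binary operator" warning, and W605 flags invalid escape sequences in ordinary string literals, which most often shows up in regex patterns. A tiny illustration of the kind of code W605 would warn about, and the raw-string fix:

```python
# Illustration of flake8/pycodestyle W605 (invalid escape sequence).
import re

pattern_warned = "\d+"   # W605: '\d' is not a valid escape in a plain string
pattern_ok = r"\d+"      # raw string literal silences the warning
print(re.findall(pattern_ok, "abc 123"))  # ['123']
```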
@@ -565,6 +565,9 @@ class LocallyShuffleData(ProxyDataFlow, RNGDataFlow):
     """ Maintain a pool to buffer datapoints, and shuffle before producing them.
     This can be used as an alternative when a complete random read is too expensive
     or impossible for the data source.
+
+    To maintain shuffling states, this dataflow is not reentrant.
+    The iterator will run indefinitely because after mixing the datapoints, it does not make sense to stop anywhere.
     """

     def __init__(self, ds, buffer_size, nr_reuse=1, shuffle_interval=None):
@@ -585,27 +588,28 @@ class LocallyShuffleData(ProxyDataFlow, RNGDataFlow):
             shuffle_interval = int(buffer_size // 3)
         self.shuffle_interval = shuffle_interval
         self.nr_reuse = nr_reuse
+        self._inf_ds = RepeatedData(ds, -1)
         self._guard = DataFlowReentrantGuard()

     def reset_state(self):
         ProxyDataFlow.reset_state(self)
         RNGDataFlow.reset_state(self)
-        self.current_cnt = 0
+        self._iter_cnt = 0
+        self._inf_iter = iter(self._inf_ds)

     def __len__(self):
         return len(self.ds) * self.nr_reuse

     def __iter__(self):
         with self._guard:
-            for i, dp in enumerate(self.ds):
+            for dp in self._inf_iter:
+                self._iter_cnt = (self._iter_cnt + 1) % self.shuffle_interval
                 # fill queue
-                if i % self.shuffle_interval == 0:
+                if self._iter_cnt % self.shuffle_interval == 0:
                     self.rng.shuffle(self.q)
-                if self.q.maxlen > len(self.q):
-                    self.q.extend([dp] * self.nr_reuse)
-                    continue
                 for _ in range(self.nr_reuse):
-                    yield self.q.popleft()
+                    if self.q.maxlen == len(self.q):
+                        yield self.q.popleft()
                     self.q.append(dp)
...
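As context for the change above: `LocallyShuffleData` now wraps the underlying dataflow in an infinite `RepeatedData` and yields only once its buffer is full, so the resulting iterator never terminates. A minimal usage sketch, not part of this commit; the LMDB path and buffer size are placeholders:

```python
# Hypothetical usage of LocallyShuffleData after this change.
# '/path/to/data.lmdb' is a placeholder, not a file from the repo.
from tensorpack.dataflow import LMDBSerializer, LocallyShuffleData, BatchData

ds = LMDBSerializer.load('/path/to/data.lmdb', shuffle=False)  # sequential reads only
# Keep a pool of 50k datapoints and shuffle within it. The resulting
# iterator runs indefinitely, so do not rely on it ever stopping.
ds = LocallyShuffleData(ds, buffer_size=50000)
ds = BatchData(ds, 32)
ds.reset_state()

for dp in ds:   # infinite stream of locally-shuffled batches
    break       # consume batches here; break is only for this sketch
```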
@@ -127,7 +127,7 @@ def psnr(prediction, ground_truth, maxp=None, name='psnr'):
         maxp: maximum possible pixel value of the image (255 in 8bit images)

     Returns:
-        A scalar tensor representing the PSNR.
+        A scalar tensor representing the PSNR
     """
     maxp = float(maxp)
...
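For reference (an editorial note, not part of the diff): the PSNR documented here is the standard definition in terms of the mean squared error between `prediction` and `ground_truth`, with `maxp` the maximum possible pixel value:

$$\mathrm{PSNR} = 20\,\log_{10}(\mathrm{maxp}) \;-\; 10\,\log_{10}\big(\mathrm{MSE}(\text{prediction},\ \text{ground\_truth})\big)$$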
@@ -50,10 +50,10 @@ try:
     # import pyarrow has a lot of side effect: https://github.com/apache/arrow/pull/2329
     # So we need an option to disable it.
     if os.environ.get('TENSORPACK_SERIALIZE', 'pyarrow') == 'pyarrow':
-        import pyarrow as pa
         if 'horovod' in sys.modules:
             logger.warn("Horovod and pyarrow may conflict due to pyarrow bugs. "
                         "Uninstall pyarrow and use msgpack instead.")
+        import pyarrow as pa
     else:
         pa = None
 except ImportError:
...
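The snippet above checks the `TENSORPACK_SERIALIZE` environment variable before importing pyarrow: any value other than `'pyarrow'` skips the import, and the warning suggests falling back to msgpack. A small sketch of how a user script might opt out (the `'msgpack'` value simply has to differ from `'pyarrow'` for this code path):

```python
# Sketch: disable the pyarrow-based serializer before tensorpack is imported.
import os
os.environ['TENSORPACK_SERIALIZE'] = 'msgpack'  # must be set before the import below

import tensorpack  # noqa: E402  -- imported after setting the env var on purpose
```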
 [flake8]
 max-line-length = 120
 # See https://pep8.readthedocs.io/en/latest/intro.html#error-codes
-ignore = E265,E741,E742,E743
+ignore = E265,E741,E742,E743,W504,W605
 exclude = .git,
           __init__.py,
           setup.py,
...