Commit 822997c7 authored by Yuxin Wu's avatar Yuxin Wu

update docs and new model

parent 65bbd28a
...@@ -2,3 +2,6 @@ An issue has to be one of the following:
- Unexpected Problems / Potential Bugs
- Feature Requests
- Questions on Using/Understanding Tensorpack
To post an issue, please click "New Issue", choose your category, and read
the instructions there.
...@@ -13,6 +13,13 @@ A datapoint is a **list** of Python objects which are called the `components` of
that yields datapoints (lists) of two components:
a numpy array of shape (64, 28, 28), and an array of shape (64,).
As you saw,
DataFlow is __independent__ of TensorFlow, since it produces arbitrary
Python objects (usually numpy arrays).
To `import tensorpack.dataflow`, you don't even have to install TensorFlow.
You can simply use DataFlow as a data processing pipeline and plug it into any other framework.
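For concreteness, here is a minimal sketch of the DataFlow described above (batched MNIST), using only `tensorpack.dataflow`:

```python
# A minimal sketch of the DataFlow described above: batched MNIST.
# Only tensorpack.dataflow is needed; TensorFlow is never imported.
from tensorpack.dataflow import dataset, BatchData

df = BatchData(dataset.Mnist('train'), 64)
df.reset_state()                  # initialize the DataFlow before iterating
for dp in df.get_data():
    images, labels = dp           # the two components of a datapoint
    print(images.shape, labels.shape)   # (64, 28, 28) (64,)
    break
```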
### Composition of DataFlow
One good thing about having a standard interface is the great code
reusability it enables.
...@@ -65,8 +72,3 @@ generator = df.get_data()
for dp in generator:
    # dp is now a list. do whatever
```
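To make the composition idea concrete, here is a sketch chaining a few of the built-in wrappers; each one takes a DataFlow and returns a new DataFlow with the same interface:

```python
# A sketch of DataFlow composition: every wrapper consumes a DataFlow
# and is itself a DataFlow, so pieces can be freely recombined.
from tensorpack.dataflow import dataset, MapDataComponent, BatchData, PrefetchDataZMQ

df = dataset.Mnist('train')                          # yields [image, label]
df = MapDataComponent(df, lambda img: img * 255, 0)  # transform component 0 (the image)
df = BatchData(df, 64)                               # group 64 datapoints into one
df = PrefetchDataZMQ(df, 3)                          # run the pipeline in 3 processes

df.reset_state()
for dp in df.get_data():
    pass   # dp is a list; feed it to any framework you like
```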
...@@ -72,27 +72,29 @@ prediction will need to be run with the corresponding training configs.
## Results
These models are trained with different configurations on trainval35k and evaluated on minival using mAP@IoU=0.50:0.95.
Performance in [Detectron](https://github.com/facebookresearch/Detectron/) can be roughly reproduced.
MaskRCNN results contain both box and mask mAP.
| Backbone | mAP<br/>(box;mask) | Detectron mAP <sup>[1](#ft1)</sup><br/> (box;mask) | Time on 8 V100s | Configurations <br/> (click to expand) |
| - | - | - | - | - |
| R50-C4 | 33.1 | | 18h | <details><summary>super quick</summary>`MODE_MASK=False FRCNN.BATCH_PER_IM=64`<br/>`PREPROC.SHORT_EDGE_SIZE=600 PREPROC.MAX_SIZE=1024`<br/>`TRAIN.LR_SCHEDULE=[150000,230000,280000]` </details> |
| R50-C4 | 36.6 | 36.5 | 44h | <details><summary>standard</summary>`MODE_MASK=False` </details> |
| R50-FPN | 37.4 | 37.9 | 29h | <details><summary>standard</summary>`MODE_MASK=False MODE_FPN=True` </details> |
| R50-C4 | 38.2;33.3 [:arrow_down:](http://models.tensorpack.com/FasterRCNN/COCO-R50C4-MaskRCNN-Standard.npz) | 37.8;32.8 | 49h | <details><summary>standard</summary>this is the default </details> |
| R50-FPN | 38.5;35.2 [:arrow_down:](http://models.tensorpack.com/FasterRCNN/COCO-R50FPN-MaskRCNN-Standard.npz) | 38.6;34.5 | 30h | <details><summary>standard</summary>`MODE_FPN=True` </details> |
| R50-FPN | 42.0;36.3 | | 41h | <details><summary>+Cascade</summary>`MODE_FPN=True FPN.CASCADE=True` </details> |
| R50-FPN | 39.5;35.2 | 39.5;34.4<sup>[2](#ft2)</sup> | 33h | <details><summary>+ConvGNHead</summary>`MODE_FPN=True`<br/>`FPN.FRCNN_HEAD_FUNC=fastrcnn_4conv1fc_gn_head` </details> |
| R50-FPN | 40.0;36.2 [:arrow_down:](http://models.tensorpack.com/FasterRCNN/COCO-R50FPN-MaskRCNN-StandardGN.npz) | 40.3;35.7 | 40h | <details><summary>+GN</summary>`MODE_FPN=True`<br/>`FPN.NORM=GN BACKBONE.NORM=GN`<br/>`FPN.FRCNN_HEAD_FUNC=fastrcnn_4conv1fc_gn_head`<br/>`FPN.MRCNN_HEAD_FUNC=maskrcnn_up4conv_gn_head` </details> |
| R101-C4 | 41.4;35.2 [:arrow_down:](http://models.tensorpack.com/FasterRCNN/COCO-R101C4-MaskRCNN-Standard.npz) | | 60h | <details><summary>standard</summary>`BACKBONE.RESNET_NUM_BLOCK=[3,4,23,3]` </details> |
| R101-FPN | 40.4;36.6 [:arrow_down:](http://models.tensorpack.com/FasterRCNN/COCO-R101FPN-MaskRCNN-Standard.npz) | 40.9;36.4 | 38h | <details><summary>standard</summary>`MODE_FPN=True`<br/>`BACKBONE.RESNET_NUM_BLOCK=[3,4,23,3]` </details> |
| R101-FPN | 46.5;40.1 [:arrow_down:](http://models.tensorpack.com/FasterRCNN/COCO-R101FPN-MaskRCNN-BetterParams.npz) <sup>[3](#ft3)</sup> | | 73h | <details><summary>+++</summary>`MODE_FPN=True FPN.CASCADE=True`<br/>`BACKBONE.RESNET_NUM_BLOCK=[3,4,23,3]`<br/>`TEST.RESULT_SCORE_THRESH=1e-4`<br/>`PREPROC.TRAIN_SHORT_EDGE_SIZE=[640,800]`<br/>`TRAIN.LR_SCHEDULE=[420000,500000,540000]` </details> |
<a id="ft1">1</a>: Here we compare models that have identical training & inference cost between the two implementations. However, their numbers differ due to many small implementation details.
<a id="ft2">2</a>: Numbers taken from [Group Normalization](https://arxiv.org/abs/1803.08494).
<a id="ft3">3</a>: Our mAP is __10+ points__ better than the official model in [matterport/Mask_RCNN](https://github.com/matterport/Mask_RCNN/releases) with the same R101-FPN backbone.
## Notes
[NOTES.md](NOTES.md) has some notes about implementation details & speed.
...@@ -441,7 +441,7 @@ class EvalCallback(Callback):
        logger.info("[EvalCallback] Will evaluate every {} epochs".format(interval))

    def _eval(self):
        if cfg.TRAINER == 'replicated':
            with ThreadPoolExecutor(max_workers=self.num_predictor, thread_name_prefix='EvalWorker') as executor, \
                    tqdm.tqdm(total=sum([df.size() for df in self.dataflows])) as pbar:
                futures = []
......
...@@ -86,11 +86,13 @@ class PeriodicRunHooks(ProxyCallback):
class EnableCallbackIf(ProxyCallback):
    """
    Disable the ``{before,after}_epoch``, ``{before,after}_run``,
    ``trigger_{epoch,step}``
    methods of a callback, unless some condition is satisfied.
    The other methods are unaffected.
    A more accurate name for this callback should be "DisableCallbackUnless", but that's too ugly.
    Note:
        If you use ``{before,after}_run``,
        ``pred`` will be evaluated only in ``before_run``.
...@@ -101,6 +103,7 @@ class EnableCallbackIf(ProxyCallback):
        Args:
            callback (Callback):
            pred (self -> bool): a callable predicate. Has to be a pure function.
                The callback is disabled unless this predicate returns True.
        """
        self._pred = pred
        super(EnableCallbackIf, self).__init__(callback)
......
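As a usage sketch (assuming `ModelSaver` and the callback's `epoch_num` property behave as in the rest of the callback API), the wrapper can gate another callback on a runtime condition:

```python
# Sketch only: keep a ModelSaver disabled until after the 10th epoch.
# `ModelSaver` and the `epoch_num` property are assumptions taken from the
# surrounding callback API; the predicate receives the proxy callback itself.
from tensorpack.callbacks import EnableCallbackIf, ModelSaver

delayed_saver = EnableCallbackIf(
    ModelSaver(),
    lambda self: self.epoch_num > 10)
# Pass `delayed_saver` in the trainer's callbacks list as usual.
```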
#!/usr/bin/env python
import os
from .serialize import loads_msgpack, loads_pyarrow, dumps_msgpack, dumps_pyarrow
......
...@@ -2,8 +2,6 @@
# File: serialize.py
import os
from .develop import create_dummy_func
__all__ = ['loads', 'dumps']
...@@ -46,6 +44,16 @@ def loads_pyarrow(buf):
    return pa.deserialize(buf)
try:
    # importing pyarrow has a lot of side effects: https://github.com/apache/arrow/pull/2329
    # So we need an option to disable it.
    if os.environ.get('TENSORPACK_SERIALIZE', 'pyarrow') == 'pyarrow':
        import pyarrow as pa
except ImportError:
    pa = None
    dumps_pyarrow = create_dummy_func('dumps_pyarrow', ['pyarrow'])  # noqa
    loads_pyarrow = create_dummy_func('loads_pyarrow', ['pyarrow'])  # noqa
try:
    import msgpack
    import msgpack_numpy
......
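Based on the hunk above, the pyarrow import is skipped whenever `TENSORPACK_SERIALIZE` is set to anything other than `pyarrow`. A sketch of opting out (the value `msgpack` is just one plausible choice; what serialization falls back to depends on code outside this hunk):

```python
# Sketch: opt out of importing pyarrow (and its side effects) by setting
# the environment variable before tensorpack is imported. Any value other
# than 'pyarrow' skips the import; 'msgpack' is one plausible choice, and
# the exact fallback behaviour is determined by code not shown in this hunk.
import os
os.environ['TENSORPACK_SERIALIZE'] = 'msgpack'

import tensorpack.dataflow  # noqa: pyarrow is not imported during this import
```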