Commit 26321ae5 authored by Yuxin Wu

update travis

parent d3f11e3f
@@ -22,28 +22,29 @@ matrix:
       python: 2.7
       env: TF_VERSION=1.3.0 TF_TYPE=release
     - os: linux
-      python: 3.5
+      python: 3.6
       env: TF_VERSION=1.3.0 TF_TYPE=release
     - os: linux
       python: 2.7
-      env: TF_VERSION=1.9.0 TF_TYPE=release
+      env: TF_VERSION=1.10.0 TF_TYPE=release
     - os: linux
-      python: 3.5
-      env: TF_VERSION=1.9.0 TF_TYPE=release PYPI=true
+      python: 3.6
+      env: TF_VERSION=1.10.0 TF_TYPE=release PYPI=true
     - os: linux
       python: 2.7
       env: TF_TYPE=nightly
     - os: linux
-      python: 3.5
+      python: 3.6
       env: TF_TYPE=nightly
   allow_failures:
     - env: TF_TYPE=nightly

 install:
   - pip install -U pip # the pip version on travis is too old
-  - pip install flake8 scikit-image opencv-python lmdb h5py pyarrow msgpack
+  - pip install flake8 scikit-image opencv-python lmdb h5py msgpack
   - pip install .
   # check that dataflow can be imported alone
+  - python -c "import pyarrow"
   - python -c "import tensorpack.dataflow"
   - ./tests/install-tensorflow.sh
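The new `python -c "import pyarrow"` line joins the existing `import tensorpack.dataflow` check; both run before `./tests/install-tensorflow.sh`, so together they verify that the dataflow subpackage and pyarrow (used by tensorpack for serialization) import on their own, without TensorFlow present. A local reproduction of that check might look like the sketch below; the assertion about TensorFlow is my reading of the intent, not something the CI script states.

```python
import importlib
import sys

# Hypothetical local version of the CI import check: each module should
# import on its own, in the order the .travis.yml runs them.
for name in ["pyarrow", "tensorpack.dataflow"]:
    importlib.import_module(name)
    print("OK: imported", name)

# Assumption: "imported alone" means the dataflow subpackage does not drag
# TensorFlow in (the CI runs this step before install-tensorflow.sh).
assert "tensorflow" not in sys.modules, "tensorpack.dataflow unexpectedly imported TensorFlow"
```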
@@ -58,7 +59,7 @@ before_script:
 script:
   - flake8 .
-  - if [[ $TRAVIS_PYTHON_VERSION == '3.5' ]]; then cd examples && flake8 .; fi # some examples are py3 only
+  - if [[ $TRAVIS_PYTHON_VERSION == '3.6' ]]; then cd examples && flake8 .; fi # some examples are py3 only
   - mkdir -p $HOME/tensorpack_data
   - export TENSORPACK_DATASET=$HOME/tensorpack_data
   - $TRAVIS_BUILD_DIR/tests/run-tests.sh
@@ -90,7 +91,7 @@ deploy:
       tags: true
       branch: master
       repo: tensorpack/tensorpack
-      python: "3.5"
+      python: "3.6"
       condition: "$PYPI = true"
   - provider: pypi
@@ -103,5 +104,5 @@ deploy:
     on:
       branch: test-travis
       repo: tensorpack/tensorpack
-      python: "3.5"
+      python: "3.6"
       condition: "$PYPI = true"
@@ -73,15 +73,15 @@ MaskRCNN results contain both box and mask mAP.
 | Backbone | mAP<br/>(box;mask) | Detectron mAP <sup>[1](#ft1)</sup><br/> (box;mask) | Time on 8 V100s | Configurations <br/> (click to expand) |
 | - | - | - | - | - |
-| R50-C4 | 33.8 | | 18h | <details><summary>super quick</summary>`MODE_MASK=False FRCNN.BATCH_PER_IM=64`<br/>`PREPROC.SHORT_EDGE_SIZE=600 PREPROC.MAX_SIZE=1024`<br/>`TRAIN.LR_SCHEDULE=[150000,230000,280000]` </details> |
+| R50-C4 | 33.1 | | 18h | <details><summary>super quick</summary>`MODE_MASK=False FRCNN.BATCH_PER_IM=64`<br/>`PREPROC.SHORT_EDGE_SIZE=600 PREPROC.MAX_SIZE=1024`<br/>`TRAIN.LR_SCHEDULE=[150000,230000,280000]` </details> |
-| R50-C4 | 37.1 | 36.5 | 44h | <details><summary>standard</summary>`MODE_MASK=False` </details> |
+| R50-C4 | 36.6 | 36.5 | 44h | <details><summary>standard</summary>`MODE_MASK=False` </details> |
-| R50-FPN | 37.5 | 37.9 | 30h | <details><summary>standard</summary>`MODE_MASK=False MODE_FPN=True` </details> |
+| R50-FPN | 37.4 | 37.9 | 30h | <details><summary>standard</summary>`MODE_MASK=False MODE_FPN=True` </details> |
-| R50-C4 | 38.5;33.7 [:arrow_down:](http://models.tensorpack.com/FasterRCNN/COCO-R50C4-MaskRCNN-Standard.npz) | 37.8;32.8 | 49h | <details><summary>standard</summary>`MODE_MASK=True` </details> |
+| R50-C4 | 37.8;33.1 [:arrow_down:](http://models.tensorpack.com/FasterRCNN/COCO-R50C4-MaskRCNN-Standard.npz) | 37.8;32.8 | 49h | <details><summary>standard</summary>`MODE_MASK=True` </details> |
-| R50-FPN | 38.8;35.4 [:arrow_down:](http://models.tensorpack.com/FasterRCNN/COCO-R50FPN-MaskRCNN-Standard.npz) | 38.6;34.5 | 32h | <details><summary>standard</summary>`MODE_MASK=True MODE_FPN=True` </details> |
+| R50-FPN | 38.2;34.9 [:arrow_down:](http://models.tensorpack.com/FasterRCNN/COCO-R50FPN-MaskRCNN-Standard.npz) | 38.6;34.5 | 32h | <details><summary>standard</summary>`MODE_MASK=True MODE_FPN=True` </details> |
-| R50-FPN | 39.8;35.5 | 39.5;34.4<sup>[2](#ft2)</sup> | 34h | <details><summary>standard+ConvGNHead</summary>`MODE_MASK=True MODE_FPN=True`<br/>`FPN.FRCNN_HEAD_FUNC=fastrcnn_4conv1fc_gn_head` </details> |
+| R50-FPN | 39.5;35.2 | 39.5;34.4<sup>[2](#ft2)</sup> | 34h | <details><summary>standard+ConvGNHead</summary>`MODE_MASK=True MODE_FPN=True`<br/>`FPN.FRCNN_HEAD_FUNC=fastrcnn_4conv1fc_gn_head` </details> |
-| R50-FPN | 40.3;36.4 [:arrow_down:](http://models.tensorpack.com/FasterRCNN/COCO-R50FPN-MaskRCNN-StandardGN.npz) | 40.3;35.7 | 44h | <details><summary>standard+GN</summary>`MODE_MASK=True MODE_FPN=True`<br/>`FPN.NORM=GN BACKBONE.NORM=GN`<br/>`FPN.FRCNN_HEAD_FUNC=fastrcnn_4conv1fc_gn_head`<br/>`FPN.MRCNN_HEAD_FUNC=maskrcnn_up4conv_gn_head` |
+| R50-FPN | 40.0;36.1 [:arrow_down:](http://models.tensorpack.com/FasterRCNN/COCO-R50FPN-MaskRCNN-StandardGN.npz) | 40.3;35.7 | 44h | <details><summary>standard+GN</summary>`MODE_MASK=True MODE_FPN=True`<br/>`FPN.NORM=GN BACKBONE.NORM=GN`<br/>`FPN.FRCNN_HEAD_FUNC=fastrcnn_4conv1fc_gn_head`<br/>`FPN.MRCNN_HEAD_FUNC=maskrcnn_up4conv_gn_head` |
-| R101-C4 | 41.7;35.5 [:arrow_down:](http://models.tensorpack.com/FasterRCNN/COCO-R101C4-MaskRCNN-Standard.npz) | | 63h | <details><summary>standard</summary>`MODE_MASK=True `<br/>`BACKBONE.RESNET_NUM_BLOCK=[3,4,23,3]` </details> |
+| R101-C4 | 40.8;35.1 [:arrow_down:](http://models.tensorpack.com/FasterRCNN/COCO-R101C4-MaskRCNN-Standard.npz) | | 63h | <details><summary>standard</summary>`MODE_MASK=True `<br/>`BACKBONE.RESNET_NUM_BLOCK=[3,4,23,3]` </details> |
-| R101-FPN | 40.7;36.9 [:arrow_down:](http://models.tensorpack.com/FasterRCNN/COCO-R101FPN-MaskRCNN-Standard.npz) | 40.9;36.4 | 40h | <details><summary>standard</summary>`MODE_MASK=True MODE_FPN=True`<br/>`BACKBONE.RESNET_NUM_BLOCK=[3,4,23,3]` </details> |
+| R101-FPN | 40.4;36.6 [:arrow_down:](http://models.tensorpack.com/FasterRCNN/COCO-R101FPN-MaskRCNN-Standard.npz) | 40.9;36.4 | 40h | <details><summary>standard</summary>`MODE_MASK=True MODE_FPN=True`<br/>`BACKBONE.RESNET_NUM_BLOCK=[3,4,23,3]` </details> |
<a id="ft1">1</a>: Here we comapre models that have identical training & inference cost between the two implementation. However their numbers are different due to many small implementation details. <a id="ft1">1</a>: Here we comapre models that have identical training & inference cost between the two implementation. However their numbers are different due to many small implementation details.
...
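The `Configurations` column above is a set of `KEY=VALUE` overrides applied on top of the defaults in the example's config file (the hunk that follows shows some of those defaults). As a loose illustration of how dotted overrides of this form can be applied to a nested config object, here is a self-contained sketch; `AttrDict` and `apply_overrides` are hypothetical stand-ins, not tensorpack's actual config API.

```python
import ast


class AttrDict(dict):
    """Minimal attribute-style dict, a stand-in for a nested config object."""
    __getattr__ = dict.__getitem__
    __setattr__ = dict.__setitem__


def apply_overrides(cfg, overrides):
    """Apply 'KEY.SUBKEY=VALUE' strings, as listed in the Configurations column."""
    for item in overrides:
        key, value = item.split("=", 1)
        *path, last = key.split(".")
        node = cfg
        for part in path:
            node = node.setdefault(part, AttrDict())
        try:
            node[last] = ast.literal_eval(value)   # numbers, booleans, lists
        except (ValueError, SyntaxError):
            node[last] = value                     # bare strings such as GN
    return cfg


cfg = AttrDict()
apply_overrides(cfg, ["MODE_MASK=True", "MODE_FPN=True",
                      "TRAIN.LR_SCHEDULE=[150000,230000,280000]"])
print(cfg)  # {'MODE_MASK': True, 'MODE_FPN': True, 'TRAIN': {'LR_SCHEDULE': [150000, 230000, 280000]}}
```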
@@ -157,15 +157,16 @@ _C.FPN.FRCNN_HEAD_FUNC = 'fastrcnn_2fc_head'
 # choices: fastrcnn_2fc_head, fastrcnn_4conv1fc_{,gn_}head
 _C.FPN.FRCNN_CONV_HEAD_DIM = 256
 _C.FPN.FRCNN_FC_HEAD_DIM = 1024
-_C.FPN.MRCNN_HEAD_FUNC = 'maskrcnn_up4conv_head'
-# choices: maskrcnn_up4conv_{,gn_}head
+_C.FPN.MRCNN_HEAD_FUNC = 'maskrcnn_up4conv_head'  # choices: maskrcnn_up4conv_{,gn_}head

 # Mask-RCNN
 _C.MRCNN.HEAD_DIM = 256

 # testing -----------------------
 _C.TEST.FRCNN_NMS_THRESH = 0.5
-_C.TEST.RESULT_SCORE_THRESH = 1e-4
+# Smaller threshold value gives significantly better mAP. But we use 0.05 for consistency with Detectron.
+_C.TEST.RESULT_SCORE_THRESH = 0.05
 _C.TEST.RESULT_SCORE_THRESH_VIS = 0.3  # only visualize confident results
 _C.TEST.RESULTS_PER_IM = 100
...
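The new comment on `RESULT_SCORE_THRESH` states the trade-off: COCO mAP rewards keeping even very low-confidence detections, so raising the threshold from 1e-4 to 0.05 (Detectron's default) gives up a little mAP in exchange for far fewer boxes per image and consistency with Detectron. A rough, self-contained sketch of the filtering this knob controls (variable names are illustrative, not tensorpack's):

```python
import numpy as np


def filter_detections(boxes, scores, score_thresh=0.05, results_per_im=100):
    """Keep detections above score_thresh, then cap at results_per_im,
    mirroring what TEST.RESULT_SCORE_THRESH / TEST.RESULTS_PER_IM control."""
    keep = scores >= score_thresh
    boxes, scores = boxes[keep], scores[keep]
    order = np.argsort(-scores)[:results_per_im]   # highest scores first
    return boxes[order], scores[order]


# Toy example: most raw detections are low-confidence noise.
rng = np.random.RandomState(0)
boxes = rng.rand(1000, 4)
scores = rng.beta(0.5, 10, size=1000)              # skewed toward small scores

for thresh in (1e-4, 0.05):
    kept_boxes, kept_scores = filter_detections(boxes, scores, score_thresh=thresh)
    print("thresh=%g -> %d boxes kept" % (thresh, len(kept_boxes)))
```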
@@ -21,7 +21,7 @@ from ..tfutils.tower import get_current_tower_context
 from ..tfutils.dependency import dependency_of_fetches
 from ..utils import logger
 from ..utils.concurrency import ShareSessionThread
-from ..utils.develop import log_deprecated, deprecated
+from ..utils.develop import deprecated
 from ..callbacks.base import Callback, CallbackFactory
 from ..callbacks.graph import RunOp
@@ -520,7 +520,7 @@ class StagingInput(FeedfreeInput):
             logger.info("Pre-filling StagingArea ...")
             for k in range(self.nr_stage):
                 self.stage_op.run()
-            logger.info("{} element{} put into StagingArea.".format(
+            logger.info("{} element{} put into StagingArea on each tower.".format(
                 self.nr_stage, "s were" if self.nr_stage > 1 else " was"))

         def _before_run(self, ctx):
@@ -534,21 +534,18 @@ class StagingInput(FeedfreeInput):
             if dependency_of_fetches(fetches, self._check_dependency_op):
                 return self.fetches

-    def __init__(self, input, towers=None, nr_stage=1, device=None):
+    def __init__(self, input, nr_stage=1, device=None):
         """
         Args:
             input (FeedfreeInput):
             nr_stage: number of elements to prefetch into each StagingArea, at the beginning.
                 Since enqueue and dequeue are synchronized, prefetching 1 element should be sufficient.
-            towers: deprecated
             device (str or None): if not None, place the StagingArea on a specific device. e.g., '/cpu:0'.
                 Otherwise, they are placed under where `get_inputs_tensors`
                 gets called, which could be unspecified in case of simple trainers.
         """
         assert isinstance(input, FeedfreeInput), input
         self._input = input
-        if towers is not None:
-            log_deprecated("StagingInput(towers=)", "Devices are handled automatically.", "2018-03-31")
         self._nr_stage = nr_stage
         self._areas = []
...
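The docstring kept above explains the idea behind `nr_stage`: the put and get ops run in lockstep at every step, so pre-filling a single element per StagingArea (one per tower, per the updated log message) is enough to overlap the next batch's copy with the current batch's compute. Below is a minimal single-tower TF 1.x sketch of that pattern, assuming `tf.contrib.staging.StagingArea` as shipped in the TF releases tested above; it is an illustration, not tensorpack's implementation.

```python
import tensorflow as tf  # TF 1.x, matching the versions tested above

# Assumed toy input tensors standing in for a real input pipeline.
image = tf.random_normal([8, 224, 224, 3])
label = tf.random_uniform([8], maxval=10, dtype=tf.int64)

# A StagingArea decouples producing the next element from consuming the
# current one (typically hiding the host->GPU copy behind compute).
area = tf.contrib.staging.StagingArea(dtypes=[tf.float32, tf.int64])
stage_op = area.put([image, label])
staged_image, staged_label = area.get()
dummy_loss = tf.reduce_mean(staged_image)

nr_stage = 1  # what StagingInput(nr_stage=...) controls
with tf.Session() as sess:
    # Pre-fill, like the "Pre-filling StagingArea ..." step in the code above.
    for _ in range(nr_stage):
        sess.run(stage_op)
    for step in range(5):
        # Each step runs one get (via the loss) and one put, so the area
        # stays at nr_stage elements; 1 is enough because they are in lockstep.
        _, loss_val = sess.run([stage_op, dummy_loss])
        print("step", step, "loss", loss_val)
```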