Commit 97cce4a2 authored by Yuxin Wu's avatar Yuxin Wu

docs update

parent 943364bc
Bug Reports/Feature Requests/Usage Questions Only:

Any unexpected problems: PLEASE always include
1. What you did:
  + Are you using any examples?
  + If yes, post the command you run and changes you made (if any).
  + If not, describe what you did that may be relevant.
    But we may not be able to resolve it without reproducible code.
2. What you observed, e.g. as many logs as possible.
3. What you expected, if not obvious.
4. Your environment (TF version, tensorpack version, cudnn version, number & type of GPUs), if it matters.
5. About efficiency issues, PLEASE first read http://tensorpack.readthedocs.io/en/latest/tutorial/performance-tuning.html

Feature Requests:
2. Add a new feature. Please note that you can implement a lot of features by extending tensorpack
   (see http://tensorpack.readthedocs.io/en/latest/tutorial/index.html#extend-tensorpack).
   It may not have to be added to tensorpack unless you have a good reason.
3. Note that we don't take example requests.

You can also use gitter (https://gitter.im/tensorpack/users) for more casual discussions.
PLEASE do them and include your findings.
## Benchmark the components
1. Use `DummyConstantInput(shapes)` as the `InputSource`,
   so that the iterations only take data from a constant tensor.
   This will help find the slow operations in your graph.
2. Use `dataflow=FakeData(shapes, random=False)` to replace your original DataFlow with a constant DataFlow.
   This is almost the same as (1), i.e., it removes the overhead of data.
3. If you're using a TF-based input pipeline you wrote, you can simply run it in a loop and test its speed.
4. Use `TestDataSpeed(mydf).start()` to benchmark your DataFlow.
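As a rough illustration of step 4, a loop like the following measures the raw throughput of a data iterator. This is a plain-Python stand-in for the idea behind `TestDataSpeed` and `FakeData(shapes, random=False)`, not tensorpack's actual implementation:

```python
import time

def benchmark_iterator(it, n=1000):
    """Pull n datapoints from an iterator and return throughput
    in datapoints per second."""
    start = time.time()
    for i, _ in enumerate(it):
        if i + 1 >= n:
            break
    elapsed = time.time() - start
    return n / max(elapsed, 1e-9)

def constant_dataflow():
    """Analogous to a constant DataFlow: yields the same fixed
    datapoint forever, so it measures pure iteration overhead."""
    item = ([0] * 64, 0)
    while True:
        yield item

print("%.0f datapoints/s" % benchmark_iterator(constant_dataflow(), n=10000))
```

If the constant source is fast but your real DataFlow is slow, the bottleneck is in your data loading, not in the graph.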
...@@ -51,16 +51,19 @@ Evaluate the performance of a model and save to json. ...@@ -51,16 +51,19 @@ Evaluate the performance of a model and save to json.
Models are trained on trainval35k and evaluated on minival using mAP@IoU=0.50:0.95.
MaskRCNN results contain both bbox and segm mAP.
| Backbone | `FASTRCNN_BATCH` | resolution  | schedule | mAP (bbox/segm) | Time          |
| -        | -                | -           | -        | -               | -             |
| R50      | 64               | (600, 1024) | 280k     | 33.0            | 22h on 8 P100 |
| R50      | 512              | (800, 1333) | 280k     | 35.6            | 55h on 8 P100 |
| R50*     | 512              | (800, 1333) | 360k     | 36.7            | 49h on 8 V100 |
| R50      | 256              | (800, 1333) | 280k     | 36.9/32.3       | 39h on 8 P100 |
| R101     | 512              | (800, 1333) | 280k     | 40.1/34.4       | 70h on 8 P100 |
These models are trained with different configurations.
The starred (*) models have the same configuration as the
`R50-C4-2x` configuration in the
[Detectron Model Zoo](https://github.com/facebookresearch/Detectron/blob/master/MODEL_ZOO.md#end-to-end-faster--mask-r-cnn-baselines0)
and get the same performance.
## Notes
class Augmentor(object):

    def augment_return_params(self, d):
        """
        Returns:
            augmented data
            augmentation params
        """