Commit 8bc909be authored by Yuxin Wu's avatar Yuxin Wu

small update in docs

parent 09995c03
@@ -103,7 +103,7 @@ class Model(ModelDesc):
         boxes_on_featuremap = proposal_boxes * (1.0 / config.ANCHOR_STRIDE)
         roi_resized = roi_align(featuremap, boxes_on_featuremap, 14)
-        feature_fastrcnn = resnet_conv5(roi_resized, config.RESNET_NUM_BLOCK[-1])    # nxc
+        feature_fastrcnn = resnet_conv5(roi_resized, config.RESNET_NUM_BLOCK[-1])    # nxcx7x7
         fastrcnn_label_logits, fastrcnn_box_logits = fastrcnn_head('fastrcnn', feature_fastrcnn, config.NUM_CLASS)
         if is_training:
@@ -133,8 +133,7 @@ class Model(ModelDesc):
                 fastrcnn_label_loss, fastrcnn_box_loss,
                 wd_cost], 'total_cost')
-            for k in self.cost, wd_cost:
-                add_moving_summary(k)
+            add_moving_summary(self.cost, wd_cost)
         else:
             label_probs = tf.nn.softmax(fastrcnn_label_logits, name='fastrcnn_all_probs')  # #proposal x #Class
             anchors = tf.tile(tf.expand_dims(proposal_boxes, 1), [1, config.NUM_CLASS - 1, 1])   # #proposal x #Cat x 4
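The second hunk of the model diff collapses a two-line loop into a single call, relying on `add_moving_summary` accepting multiple tensors at once. A minimal sketch of that loop-to-variadic refactor, using a hypothetical stand-in recorder (`summarized`) rather than tensorpack's real summary machinery:

```python
# Hypothetical stand-in for tensorpack's add_moving_summary: the real
# function registers tensors for moving-average summaries; this sketch
# only records names, to illustrate the loop -> variadic-call refactor.
summarized = []

def add_moving_summary(*tensors):
    """Accept one or more values in a single call."""
    for t in tensors:
        summarized.append(t)

# Before the commit: one call per value, driven by a loop.
for k in ("total_cost", "wd_cost"):
    add_moving_summary(k)

# After the commit: a single variadic call does the same work.
summarized.clear()
add_moving_summary("total_cost", "wd_cost")
print(summarized)  # ['total_cost', 'wd_cost']
```

Both forms register the same values; the variadic call is simply shorter and matches the function's signature.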
@@ -1,5 +1,9 @@
 # tensorpack examples
-Training examples with __reproducible__ and meaningful performance.
+Training examples with __reproducible__ performance.
 
+__Reproducible performance is important__. Usually deep learning code is easy to write,
+but hard to know the correctness -- wrong code will usually still converge.
+Without a setting and performance comparable to someone else, you don't know if an implementation is correct or not.
+
 ## Getting Started:
 + [An illustrative mnist example with explanation of the framework](mnist-convnet.py)
@@ -13,7 +17,7 @@ Training examples with __reproducible__ and meaningful performance.
 | Name | Performance |
 | --- | --- |
 | Train [ResNet](ResNet) and [ShuffleNet](ShuffleNet) on ImageNet | reproduce paper |
-| [Train ResNet50-Faster-RCNN on COCO](FasterRCNN) | reproduce paper |
+| [Train Faster-RCNN on COCO](FasterRCNN) | reproduce paper |
 | [DoReFa-Net: training binary / low-bitwidth CNN on ImageNet](DoReFa-Net) | reproduce paper |
 | [Generative Adversarial Network(GAN) variants](GAN), including DCGAN, InfoGAN, <br/> Conditional GAN, WGAN, BEGAN, DiscoGAN, Image to Image, CycleGAN | visually reproduce |
 | [Inception-BN and InceptionV3](Inception) | reproduce reference code |
@@ -44,7 +48,3 @@ Training examples with __reproducible__ and meaningful performance.
 Example needs to satisfy one of the following:
 + Reproduce performance of a published or well-known paper.
 + Illustrate a new way of using the library that is currently not covered.
-
-__Performance is important__. Usually deep learning code is easy to write,
-but hard to know the correctness -- thanks to SGD things will usually still converge when you've made mistakes.
-Without a setting and performance comparable to someone else, you don't know if an implementation is correct or not.