Commit 6745a4e4 authored by Yuxin Wu

update readme

parent 174a62c0
@@ -6,6 +6,7 @@ Reproduce the HED paper by Saining. See [https://arxiv.org/abs/1504.06375](https://arxiv.org/abs/1504.06375)
![HED](demo.jpg)
(Bottom-left: raw fused heatmap; middle and right columns: raw heatmaps at different stages)
HED is a fully-convolutional architecture. This code would generally also work
for other FCN tasks such as semantic segmentation and detection.
@@ -18,22 +19,22 @@ It requires a pretrained vgg16 model. See the docs in [examples/load-vgg16.py](../
for instructions to convert from the vgg16 caffe model.
To view augmented training images:
-```
+```bash
./hed.py --view
```
To start training:
-```
+```bash
./hed.py --load vgg16.npy
```
To run inference (producing a heatmap for each level as out*.png):
-```
+```bash
./hed.py --load pretrained.model --run a.jpg
```
To view the loss curve:
-```
+```bash
cat train_log/hed/stat.json | jq '.[] |
[.xentropy1,.xentropy2,.xentropy3,.xentropy4,.xentropy5,.xentropy6] |
map(tostring) | join("\t") | .' -r | \
......
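The jq filter above implies that `stat.json` is a JSON array of per-epoch dicts, each carrying keys `xentropy1` through `xentropy6`. As a minimal sketch under that assumption (the helper name `loss_rows` is hypothetical, not part of the repo), the same tab-separated extraction in Python:

```python
import json

def loss_rows(path="train_log/hed/stat.json"):
    # Assumes the file is a JSON array of per-epoch dicts with
    # keys "xentropy1" .. "xentropy6", as the jq filter implies.
    with open(path) as f:
        stats = json.load(f)
    keys = ["xentropy%d" % i for i in range(1, 7)]
    # One tab-separated row per epoch, matching the jq output.
    return ["\t".join(str(e[k]) for k in keys) for e in stats]
```

This avoids a jq dependency when plotting the six per-stage losses with Python tooling instead.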
@@ -16,7 +16,7 @@ from tensorpack.tfutils.summary import *
class Model(ModelDesc):
    def _get_input_vars(self):
-        return [InputVar(tf.float32, [None, None, None] + [3], 'image'),
+        return [InputVar(tf.float32, [None, None, None, 3], 'image'),
                InputVar(tf.int32, [None, None, None], 'edgemap')]
    def _build_graph(self, input_vars):
......
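The input-shape change in the hunk above is purely cosmetic: in Python, concatenating `[None, None, None]` with `[3]` yields exactly the same list as writing the literal `[None, None, None, 3]`, so the tensor shape (unknown batch/height/width, 3 channels) is unchanged. A minimal check:

```python
# Both spellings of the image shape are identical Python lists;
# the commit just writes the literal directly for readability.
old_shape = [None, None, None] + [3]
new_shape = [None, None, None, 3]
assert old_shape == new_shape  # NHWC with 3 channels, N/H/W unknown
```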
@@ -12,3 +12,4 @@ Examples with __reproducible__ and meaningful performance.
+ [Inception-BN with 71% accuracy](Inception/inception-bn.py)
+ [InceptionV3 with 74.5% accuracy (similar to the official code)](Inception/inceptionv3.py)
+ [ResNet for Cifar10 and SVHN](ResNet)
+ [Holistically-Nested Edge Detection](HED)