Shashank Suhas / seminar-breakout · Commits

Commit d6c2d6b3 authored May 30, 2017 by Yuxin Wu

add CAM

parent 38d26977
Showing 5 changed files with 294 additions and 5 deletions.
- README.md (+2 −2)
- examples/README.md (+2 −2)
- examples/Saliency/CAM-demo.jpg (+0 −0)
- examples/Saliency/CAM-resnet.py (+265 −0)
- examples/Saliency/README.md (+25 −1)
README.md

```diff
@@ -12,8 +12,8 @@ See some [examples](examples) to learn about the framework:
 + [Generative Adversarial Network(GAN) variants](examples/GAN), including DCGAN, InfoGAN, Conditional GAN, WGAN, BEGAN, DiscoGAN, Image to Image, CycleGAN.
 + [Fully-convolutional Network for Holistically-Nested Edge Detection(HED)](examples/HED)
 + [Spatial Transformer Networks on MNIST addition](examples/SpatialTransformer)
-+ [Visualize Saliency Maps by Guided ReLU](examples/Saliency)
-+ [Similarity Learning on MNIST](examples/SimilarityLearning)
++ [Visualize CNN saliency maps](examples/Saliency)
++ [Similarity learning on MNIST](examples/SimilarityLearning)
 ### Reinforcement Learning:
 + [Deep Q-Network(DQN) variants on Atari games](examples/DeepQNetwork), including DQN, DoubleDQN, DuelingDQN.
```
examples/README.md

```diff
@@ -17,8 +17,8 @@ Training examples with __reproducible__ and meaningful performance.
 + [InceptionV3 with 74% accuracy (similar to the official code)](Inception/inceptionv3.py)
 + [Fully-convolutional Network for Holistically-Nested Edge Detection(HED)](HED)
 + [Spatial Transformer Networks on MNIST addition](SpatialTransformer)
-+ [Visualize Saliency Maps by Guided ReLU](Saliency)
-+ [Similarity Learning on MNIST](SimilarityLearning)
++ [Visualize CNN saliency maps](Saliency)
++ [Similarity learning on MNIST](SimilarityLearning)
 + Load a pre-trained [AlexNet](load-alexnet.py) or [VGG16](load-vgg16.py) model.
 + Load a pre-trained [Convolutional Pose Machines](ConvolutionalPoseMachines/).
```
examples/Saliency/CAM-demo.jpg — new file (mode 100644), 42.7 KB (binary image).
examples/Saliency/CAM-resnet.py — new file (mode 100755), +265 lines (diff collapsed on the page).
examples/Saliency/README.md

## Visualize Saliency Maps & Class Activation Maps

Implement the Guided-ReLU visualization used in the paper:

* [Striving for Simplicity: The All Convolutional Net](https://arxiv.org/abs/1412.6806)

And the class activation mapping (CAM) visualization proposed in the paper:

* [Learning Deep Features for Discriminative Localization](http://cnnlocalization.csail.mit.edu/)

## Saliency Maps

`saliency-maps.py` takes an image and produces its saliency map by running a ResNet-50 and backpropagating its maximum activation back to the input image space.
Similar techniques can be used to visualize the concept learned by each filter in the network.
...

Left to right:

+ the magnitude blended with the original image
+ positive correlated pixels (keep original color)
+ negative correlated pixels (keep original color)
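The Guided-ReLU backward rule is simple enough to state in a few lines: a gradient flows through a ReLU only where the forward input was positive (the usual ReLU mask) and the incoming gradient is also positive. A minimal NumPy sketch of that rule, not part of this commit's code (the example arrays are made up):

```python
import numpy as np

def relu_forward(x):
    # Plain ReLU forward pass.
    return np.maximum(x, 0.0)

def guided_relu_backward(x, grad_out):
    # Guided backprop: keep a gradient only where BOTH the forward
    # input x was positive (the normal ReLU mask) AND the incoming
    # gradient grad_out is positive.
    return np.where((x > 0) & (grad_out > 0), grad_out, 0.0)

x = np.array([1.5, -2.0, 0.3, 4.0])
g = np.array([0.7, 0.9, -0.2, 1.1])
print(relu_forward(x))             # usual forward activations
print(guided_relu_backward(x, g))  # only positions 0 and 3 survive
```

Applying this rule at every ReLU while backpropagating the network's maximum activation yields the sharp saliency maps shown above.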
## CAM

`CAM-resnet.py` fine-tunes a variant of ResNet to have 2x larger last-layer feature maps, then produces CAM visualizations.

Usage:

1. Fine-tune or retrain the ResNet:

   ```bash
   ./CAM-resnet.py --data /path/to/imagenet [--load ImageNet-ResNet18.npy] [--gpu 0,1,2,3]
   ```

   Pretrained and fine-tuned ResNet can be downloaded [here](https://drive.google.com/open?id=0B9IPQTvr2BBkTXBlZmh1cmlnQ0k) and [here](https://drive.google.com/open?id=0B9IPQTvr2BBkQk9qcmtGSERlNUk).
2. Generate CAM on ImageNet validation set:

   ```bash
   ./CAM-resnet.py --data /path/to/imagenet --load ImageNet-ResNet18-2xGAP.npy --cam
   ```

<p align="center">
<img src="./CAM-demo.jpg" width="900">
</p>
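Once the fine-tuned weights are available, the CAM computation itself is just a weighted sum over the last conv layer's feature maps, using the final linear layer's weights for the chosen class. A minimal NumPy sketch, not taken from `CAM-resnet.py`; the shapes and the global-average-pool-then-linear head are assumptions for illustration:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """feature_maps: (C, H, W) activations of the last conv layer.
    fc_weights: (num_classes, C) weights of the linear layer that
    follows global average pooling.
    Returns an (H, W) map of per-location evidence for class_idx."""
    w = fc_weights[class_idx]                      # (C,)
    # Weighted sum over the channel axis -> (H, W).
    return np.tensordot(w, feature_maps, axes=1)

rng = np.random.default_rng(0)
fmaps = rng.random((512, 14, 14))   # e.g. 2x-enlarged last-layer maps
w_fc = rng.random((1000, 512))      # hypothetical 1000-class head
cam = class_activation_map(fmaps, w_fc, class_idx=281)
print(cam.shape)  # (14, 14)
```

Because the head is global average pooling followed by a linear layer, averaging the CAM over all locations recovers exactly the class score, which is an easy sanity check on an implementation.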