Commit 2e490884 authored by Yuxin Wu

Direct model links to models.tensorpack.com

parent 5f56f6a5
......
@@ -16,7 +16,7 @@ wget https://github.com/shihenw/convolutional-pose-machines-release/raw/master/m
python -m tensorpack.utils.loadcaffe pose_deploy_resize.prototxt pose_iter_320000.caffemodel CPM-original.npy
```
-Or you can download the converted model from [model zoo](https://drive.google.com/open?id=0B9IPQTvr2BBkRU8zM2w2ZGh3eU0).
+Or you can download the converted model from [model zoo](http://models.tensorpack.com/caffe/).
Run it on an image, and produce `output.jpg`:
```
......
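As an aside, the `CPM-original.npy` produced by the conversion step above is a plain parameter dictionary (layer name → numpy array). A minimal sketch of inspecting it, assuming standard numpy pickling:

```python
# Sketch: inspect the dictionary written by tensorpack.utils.loadcaffe.
# Assumes CPM-original.npy exists from the conversion step above.
import numpy as np

params = np.load('CPM-original.npy', encoding='latin1', allow_pickle=True).item()
for name, array in sorted(params.items()):
    print(name, array.shape)
```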
......
@@ -46,6 +46,6 @@ Watch the agent play:
```
./DQN.py --rom breakout.bin --task play --load path/to/model
```
-A pretrained model on breakout can be downloaded [here](https://drive.google.com/open?id=0B9IPQTvr2BBkN1Jrei1xWW0yR28).
+A pretrained model on breakout can be downloaded [here](http://models.tensorpack.com/DeepQNetwork/).
A3C code and models for Atari games in OpenAI Gym are released in [examples/A3C-Gym](../A3C-Gym).
......
@@ -7,7 +7,7 @@ We're not planning to release our C++ runtime for bit-operations.
In this repo, bit operations are performed through `tf.float32`.
Pretrained models for (1,4,32)-ResNet18 and (1,2,6)-AlexNet are available at
-[google drive](https://drive.google.com/a/megvii.com/folderview?id=0B308TeQzmFDLa0xOeVQwcXg1ZjQ).
+[tensorpack model zoo](http://models.tensorpack.com/DoReFa-Net/).
They're provided as numpy dictionaries, so they should be easy to port into other applications.
The __binary-weight 4-bit-activation ResNet-18__ model has 59.2% top-1 validation accuracy.
......
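To make "bit operations are performed through `tf.float32`" concrete: quantization is simulated in floating point, with the non-differentiable rounding bypassed by a straight-through gradient. A hedged sketch in the spirit of DoReFa-Net (the function name and the use of `tf.custom_gradient` are my choices, not necessarily the repo's code):

```python
import tensorflow as tf

def quantize(x, k):
    """Sketch: uniformly quantize x in [0, 1] to k bits, simulated in float32."""
    n = float(2 ** k - 1)

    @tf.custom_gradient
    def _quantize(x):
        # Rounding has zero gradient almost everywhere, so pass the
        # gradient through unchanged (straight-through estimator).
        return tf.round(x * n) / n, lambda dy: dy

    return _quantize(x)
```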
......
@@ -37,7 +37,7 @@ To predict on an image (and show output in a window):
./train.py --predict input.jpg --load /path/to/model
```
-To evaluate the performance (pretrained models can be downloaded in [model zoo](https://drive.google.com/open?id=1J0xuDAuyOWiuJRm2LfGoz5PUv9_dKuxq)):
+To evaluate the performance (pretrained models can be downloaded in the [model zoo](http://models.tensorpack.com/FasterRCNN)):
```
./train.py --evaluate output.json --load /path/to/model
```
......
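Note that the `output.json` written by `--evaluate` is in the COCO results format, so it can also be scored directly with pycocotools. A minimal sketch (the annotation path is an assumption about the local COCO layout):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Hypothetical path; point it at your local COCO annotation file.
coco_gt = COCO('annotations/instances_minival2014.json')
coco_dt = coco_gt.loadRes('output.json')  # detections written by --evaluate

ev = COCOeval(coco_gt, coco_dt, 'bbox')
ev.evaluate()
ev.accumulate()
ev.summarize()  # prints AP at the standard IoU thresholds
```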
......
@@ -19,8 +19,7 @@ from GAN import GANModelDesc, GANTrainer, MultiGPUGANTrainer
Boundary Equilibrium GAN.
See the docstring in DCGAN.py for usage.
-A pretrained model on CelebA is at
-https://drive.google.com/open?id=0B5uDfUQ1JTglUmgyZV8zQmNOTVU
+A pretrained model on CelebA is at http://models.tensorpack.com/GAN/
"""
......
......
@@ -24,7 +24,7 @@ To train:
To visualize:
./ConditionalGAN-mnist.py --sample --load path/to/model
-A pretrained model is at https://drive.google.com/open?id=0B9IPQTvr2BBkLUF2M0RXU1NYSkE
+A pretrained model is at http://models.tensorpack.com/GAN/
"""
BATCH = 128
......
......
@@ -31,7 +31,7 @@ from GAN import GANTrainer, RandomZData, GANModelDesc
You can also train on other images (just use any directory of jpg files in
`--data`), but you may need to change the preprocessing.
-A pretrained model on CelebA is at https://drive.google.com/open?id=0B9IPQTvr2BBkLUF2M0RXU1NYSkE
+A pretrained model on CelebA is at http://models.tensorpack.com/GAN/
"""
# global vars
......
......
@@ -25,7 +25,7 @@ To train:
To visualize:
./InfoGAN-mnist.py --sample --load path/to/model
-A pretrained model is at https://drive.google.com/open?id=0B9IPQTvr2BBkLUF2M0RXU1NYSkE
+A pretrained model is at http://models.tensorpack.com/GAN/
"""
BATCH = 128
......
......
@@ -33,7 +33,7 @@ To run inference (produce a heatmap at each level as out*.png):
```bash
./hed.py --load pretrained.model --run a.jpg
```
-Models I trained can be downloaded [here](https://drive.google.com/drive/folders/0B5uDfUQ1JTgldzVLaDBERG9zQmc?usp=sharing).
+Models I trained can be downloaded [here](http://models.tensorpack.com/HED/).
To view the loss curve:
```bash
......
......
@@ -39,7 +39,7 @@ Usage:
./CAM-resnet.py --data /path/to/imagenet [--load ImageNet-ResNet18-Preact.npz] [--gpu 0,1,2,3]
```
Pretrained and fine-tuned ResNet can be downloaded
-[here](https://drive.google.com/open?id=0B9IPQTvr2BBkTXBlZmh1cmlnQ0k) and [here](https://drive.google.com/open?id=0B9IPQTvr2BBkQk9qcmtGSERlNUk).
+[here](https://goo.gl/6XjK9V) and [here](http://models.tensorpack.com/Visualization/).
2. Generate CAM on ImageNet validation set:
```bash
......
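For context on the CAM step above: a class activation map is just the last conv layer's feature maps weighted by the classifier weights of one class. A minimal numpy sketch (the array layouts are assumptions, not the repo's exact code):

```python
import numpy as np

def class_activation_map(conv_feats, fc_weights, class_id):
    """conv_feats: [H, W, C] last-conv activations for one image.
    fc_weights: [C, num_classes] weights of the FC layer after global pooling.
    Returns an [H, W] heatmap; upsample to the input size to visualize."""
    return np.tensordot(conv_feats, fc_weights[:, class_id], axes=([2], [0]))
```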
......
@@ -22,7 +22,7 @@ Train (takes 24 hours on 8 Maxwell TitanX):
./shufflenet.py --data /path/to/ilsvrc/
```
-Eval the [pretrained model](https://drive.google.com/open?id=1Q1C2BCkbOK2HfhUB0Yfn_W_F68bqnA6y):
+Eval the [pretrained model](http://models.tensorpack.com/ShuffleNet/):
```
./shufflenet.py --eval --data /path/to/ilsvrc --load /path/to/model
```
......
@@ -20,7 +20,7 @@ To train (takes about 300 epochs to reach 8.8% error):
./mnist-addition.py
```
-To draw the above visualization with [pretrained model](https://drive.google.com/drive/folders/0B9IPQTvr2BBkUWM3X0hDZHJtTmc?usp=sharing):
+To draw the above visualization with the [pretrained model](http://models.tensorpack.com/SpatialTransformer/):
```bash
./mnist-addition.py --load pretrained.npy --view
```
......
@@ -26,6 +26,8 @@ Usage:
PATH/TO/VGG/{VGG_ILSVRC_16_layers_deploy.prototxt,VGG_ILSVRC_16_layers.caffemodel} vgg16.npy
+Or download a converted caffe model from http://models.tensorpack.com/caffe/
Then, run it:
./load-vgg16.py --load vgg16.npy --input cat.png
"""
......
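Once `vgg16.npy` exists, the dictionary can be fed to a tensorpack session initializer. A hedged sketch using `DictRestore` (the surrounding model/`PredictConfig` wiring is omitted; the file name assumes the conversion step above):

```python
import numpy as np
from tensorpack.tfutils.sessinit import DictRestore

# Load the converted parameter dictionary (variable name -> numpy array).
params = np.load('vgg16.npy', encoding='latin1', allow_pickle=True).item()

# Pass this as session_init to a PredictConfig / TrainConfig.
session_init = DictRestore(params)
```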