Commit d6723566 authored by Yuxin Wu

update readme

parent 8843f7ff
@@ -11,7 +11,7 @@ You can actually train them and reproduce the performance... not just to see how
 + [InceptionV3 on ImageNet](examples/Inception/inceptionv3.py)
 + [Fully-convolutional Network for Holistically-Nested Edge Detection](examples/HED)
 + [Spatial Transformer Networks on MNIST addition](examples/SpatialTransformer)
-+ [Generative Adversarial Networks](examples/GAN)
++ [Deep Convolutional Generative Adversarial Networks](examples/GAN)
 + [DQN variants on Atari games](examples/Atari2600)
 + [Asynchronous Advantage Actor-Critic(A3C) with demos on OpenAI Gym](examples/OpenAIGym)
 + [char-rnn language model](examples/char-rnn)
...
@@ -3,7 +3,7 @@ Code and model for the paper:
 [DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients](http://arxiv.org/abs/1606.06160), by Zhou et al.
 We hosted a demo at CVPR16 on behalf of Megvii, Inc, running a real-time 1/4-VGG size DoReFa-Net on ARM and half-VGG size DoReFa-Net on FPGA.
-We're not planning to release those runtime bit-op libraries for now. In these examples, bit operations are run in float32.
+We're not planning to release those runtime bit-op libraries for now. In this repo, bit operations are run in float32.
 Pretrained model for 1-2-6-AlexNet is available at
 [google drive](https://drive.google.com/a/megvii.com/folderview?id=0B308TeQzmFDLa0xOeVQwcXg1ZjQ).
...
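For context on "bit operations are run in float32": the released examples simulate low-bitwidth values with ordinary floats instead of real bit operations. A minimal NumPy sketch of the paper's uniform k-bit quantizer (illustrative only, not the repo's actual implementation):

```python
import numpy as np

def quantize_k(x, k):
    """Uniformly quantize x in [0, 1] to k bits, computed entirely in float32.

    The k-bit value is stored as an ordinary float, which is why no
    special bit-op runtime is needed to run these examples.
    """
    n = float(2 ** k - 1)
    return (np.round(x * n) / n).astype(np.float32)

x = np.random.rand(5).astype(np.float32)
print(quantize_k(x, 2))   # each entry is one of {0, 1/3, 2/3, 1}
```

During training, the paper passes gradients straight through the rounding (a straight-through estimator), so the non-differentiable `round` does not block backpropagation.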
@@ -126,7 +126,7 @@ def sample(model_path):
     o = o[:,:,:,::-1]
     viz = next(build_patch_list(o, nr_row=10, nr_col=10, viz=True))

-def interp(model_path):
+def vec(model_path):
     func = OfflinePredictor(PredictConfig(
         session_init=get_model_loader(model_path),
         model=Model(),
@@ -149,7 +149,7 @@ if __name__ == '__main__':
     parser.add_argument('--gpu', help='comma separated list of GPU(s) to use.')
     parser.add_argument('--load', help='load model')
     parser.add_argument('--sample', action='store_true', help='run sampling')
-    parser.add_argument('--interp', action='store_true', help='run interpolation')
+    parser.add_argument('--vec', action='store_true', help='run vec arithmetic demo')
    parser.add_argument('--data', help='`image_align_celeba` directory of the celebA dataset')
     global args
     args = parser.parse_args()
@@ -157,8 +157,8 @@ if __name__ == '__main__':
         os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu
     if args.sample:
         sample(args.load)
-    elif args.interp:
-        interp(args.load)
+    elif args.vec:
+        vec(args.load)
     else:
         assert args.data
         config = get_config()
...
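The `interp` → `vec` rename matches what the demo now does: arithmetic on latent vectors (the classic DCGAN "a - b + c" demo) rather than interpolation. A hedged sketch of the idea; `vec_arithmetic` and the `generate` callable are illustrative stand-ins, not names from this repo:

```python
import numpy as np

def vec_arithmetic(generate, za, zb, zc):
    """Decode za - zb + zc with a trained generator.

    `generate` maps a batch of latent codes of shape (N, z_dim) to
    images, e.g. a wrapper around an OfflinePredictor for the model.
    """
    z = za - zb + zc
    return generate(z[np.newaxis, :])   # batch of one latent code

rng = np.random.RandomState(0)
za, zb, zc = [rng.normal(size=100).astype(np.float32) for _ in range(3)]
identity_generate = lambda z: z         # placeholder generator for a dry run
print(vec_arithmetic(identity_generate, za, zb, zc).shape)   # (1, 100)
```

Per the argparse setup above, the demo is then run with `--load <model> --vec` instead of the old `--interp` flag.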
@@ -2,7 +2,9 @@
 Reproduce DCGAN following the setup in [dcgan.torch](https://github.com/soumith/dcgan.torch).

-Samples from CelebA dataset:
+Play with the [pretrained model](https://drive.google.com/drive/folders/0B9IPQTvr2BBkLUF2M0RXU1NYSkE?usp=sharing) on CelebA face dataset.
+
+Generated samples:

 ![sample](demo/CelebA-samples.jpg)
...
@@ -11,6 +11,7 @@ Training examples with __reproducible__ and meaningful performance.
 + [ResNet for ImageNet/Cifar10/SVHN](ResNet)
 + [Holistically-Nested Edge Detection](HED)
 + [Spatial Transformer Networks on MNIST addition](SpatialTransformer)
-+ [DisturbLabel, because I don't believe the paper](DisturbLabel)
++ [Generative Adversarial Networks](GAN)
++ [DisturbLabel -- I don't believe the paper](DisturbLabel)
 + Reinforcement learning (DQN, A3C) on [Atari games](Atari2600) and [demos on OpenAI Gym](OpenAIGym).
 + [char-rnn for fun](char-rnn)