Commit 930af0b6 authored by Yuxin Wu

typos & clean-ups (including #358)

parent ebf1d570
@@ -51,7 +51,6 @@ PREDICT_BATCH_SIZE = 15 # batch for efficient forward
SIMULATOR_PROC = 50
PREDICTOR_THREAD_PER_GPU = 3
PREDICTOR_THREAD = None
EVALUATE_PROC = min(multiprocessing.cpu_count() // 2, 20)
NUM_ACTIONS = None
ENV_NAME = None
@@ -18,7 +18,8 @@ from GAN import GANModelDesc, GANTrainer, MultiGPUGANTrainer
Boundary Equilibrium GAN.
See the docstring in DCGAN.py for usage.
A pretrained model on CelebA is at https://drive.google.com/open?id=0B5uDfUQ1JTglUmgyZV8zQmNOTVU
A pretrained model on CelebA is at
https://drive.google.com/open?id=0B5uDfUQ1JTglUmgyZV8zQmNOTVU
"""
@@ -20,7 +20,10 @@ from GAN import GANTrainer, GANModelDesc
"""
1. Download the dataset following the original project: https://github.com/junyanz/CycleGAN#train
2. ./CycleGAN.py --data /path/to/datasets/horse2zebra
Training and testing visuliazations will be in tensorboard.
Training and testing visualizations will be in tensorboard.
This implementation doesn't use fake sample buffer.
It's not hard to add but I didn't observe any difference with it.
"""
SHAPE = 256
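The "fake sample buffer" mentioned in the CycleGAN note above is the image-history pool used by the original CycleGAN implementation (following Shrivastava et al. 2017): the discriminator is sometimes fed an older generated image instead of the newest one. A minimal, framework-agnostic sketch (hypothetical helper, not part of this repository) could look like:

```python
import numpy as np

class FakeSampleBuffer:
    """A sketch of a fake-sample buffer: keeps a pool of previously generated
    images and returns either the newest fake or a randomly stored older one."""

    def __init__(self, pool_size=50):
        self.pool_size = pool_size
        self.images = []

    def query(self, image):
        # Buffer disabled: always use the newest fake.
        if self.pool_size == 0:
            return image
        # Fill the pool first.
        if len(self.images) < self.pool_size:
            self.images.append(image)
            return image
        # With probability 0.5, swap in an old fake and store the new one.
        if np.random.rand() > 0.5:
            idx = np.random.randint(self.pool_size)
            old_image = self.images[idx]
            self.images[idx] = image
            return old_image
        return image
```

The discriminator would then see `buffer.query(fake)` instead of `fake`; as the note above says, the author did not observe a difference with it in this implementation.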
@@ -37,7 +37,7 @@ Reproduce DCGAN following the setup in [dcgan.torch](https://github.com/soumith/
## Image2Image.py
Image-to-Image following the setup in [pix2pix](https://github.com/phillipi/pix2pix).
Image-to-Image translation following the setup in [pix2pix](https://github.com/phillipi/pix2pix).
For example, with the cityscapes dataset, it learns to generate semantic segmentation map of urban scene:
@@ -71,5 +71,6 @@ Some BEGAN samples:
## CycleGAN.py, DiscoGAN-CelebA.py
Reproduce CycleGAN with the original datasets, and DiscoGAN on CelebA. They are pretty much the same idea with different architecture.
CycleGAN horse-to-zebra in tensorboard:
![cyclegan-sample](demo/CycleGAN-horse2zebra.jpg)
@@ -27,7 +27,7 @@ To start training:
```bash
./hed.py --load vgg16.npy
```
It takes about 100k steps (~10 hour on a TitanX) to reach a reasonable performance.
It takes about 100k steps (~10 hours on a TitanX) to reach a reasonable performance.
To run inference (produce a heatmap at each level at out*.png):
```bash
@@ -41,3 +41,4 @@ cat train_log/hed/stat.json | jq '.[] |
"\(.xentropy1)\t\(.xentropy2)\t\(.xentropy3)\t\(.xentropy4)\t\(.xentropy5)\t\(.xentropy6)"' -r | \
tpk-plot-point --legend 1,2,3,4,5,final --decay 0.8
```
Or just open tensorboard.
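For readers who prefer Python over jq, a rough equivalent of the plotting pipeline above (assuming `train_log/hed/stat.json` is a JSON list of per-epoch dicts with keys `xentropy1`..`xentropy6`, and skipping the exponential smoothing that `--decay 0.8` applies) might be:

```python
import json
import matplotlib.pyplot as plt

# Assumes stat.json is a list of per-epoch dicts containing xentropy1..xentropy6,
# the same fields the jq command above extracts.
with open('train_log/hed/stat.json') as f:
    stats = json.load(f)

for i, label in zip(range(1, 7), ['1', '2', '3', '4', '5', 'final']):
    plt.plot([s['xentropy%d' % i] for s in stats], label=label)
plt.legend()
plt.xlabel('epoch')
plt.ylabel('cross-entropy')
plt.show()
```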