Commit 743dc730 authored by Yuxin Wu

fix batchnorm

parent f2ae63d2
...@@ -4,7 +4,7 @@
Apart from the actual training iterations that minimize the cost,
you almost surely would like to do something else during training.
Callbacks are such an interface to describe what to do besides the
training iterations defined by the trainers.

There are several places where you might want to do something else:
...@@ -14,39 +14,39 @@ There are several places where you might want to do something else:
* Between epochs (e.g. save the model, run some validation)
* After the training (e.g. send the model somewhere, send a message to your phone)

By writing callbacks to implement these tasks, you can reuse the code as long as
you're using tensorpack trainers. For example, these are the callbacks I used when training
a ResNet:
```python
TrainConfig(
  # ...
  callbacks=[
    # save the model every epoch
    ModelSaver(),
    # run inference on another Dataflow every epoch, compute top1/top5 classification error and save them
    InferenceRunner(dataset_val, [
        ClassificationError('wrong-top1', 'val-error-top1'),
        ClassificationError('wrong-top5', 'val-error-top5')]),
    # schedule the learning rate based on epoch number
    ScheduledHyperParamSetter('learning_rate',
                              [(30, 1e-2), (60, 1e-3), (85, 1e-4), (95, 1e-5)]),
    # allow manually setting the learning rate during training
    HumanHyperParamSetter('learning_rate'),
    # send validation error to my phone through pushbullet
    SendStat('curl -u your_id_xxx: https://api.pushbullet.com/v2/pushes \\
        -d type=note -d title="validation error" \\
        -d body={val-error-top1} > /dev/null 2>&1',
        'val-error-top1')
  ],
  extra_callbacks=[    # these are already enabled by default
    # maintain and summarize moving average of some tensors (e.g. training loss, training error)
    MovingAverageSummary(),
    # draw a nice progress bar
    ProgressBar(),
    # print all the statistics I've created and scalar tensors I've summarized
    StatPrinter(),
  ]
)
```
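Writing your own callback is similarly lightweight. The following is a minimal sketch (hypothetical, not from the repository; it assumes the `Callback` base class provides a `_trigger_epoch` hook that the trainer invokes between epochs, which is how the built-in callbacks above are driven):

```python
import os
from tensorpack.callbacks import Callback

class RunCommand(Callback):
    """ Hypothetical example: run an arbitrary shell command between epochs. """
    def __init__(self, command):
        self._command = command

    def _trigger_epoch(self):
        # invoked by the trainer after every epoch
        ret = os.system(self._command)
        if ret != 0:
            print("Command '{}' returned {}".format(self._command, ret))
```

Such a class can then be appended to the `callbacks=[...]` list above like any built-in callback.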
...
...@@ -5,14 +5,15 @@
Training is basically **running something again and again**.
Tensorpack base trainer implements the logic of *running the iteration*,
and other trainers implement *what the iteration is*.

Most neural network training tasks are single-cost optimization.
Tensorpack provides some trainer implementations for such tasks.
These trainers will by default minimize `ModelDesc.cost`,
therefore you can use these trainers as long as you set `self.cost` in `ModelDesc._build_graph()`,
as is done in most examples.
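As a rough illustration (a sketch only, not taken from the repository; names such as `MyModel` and the exact `_build_graph` signature are assumptions), setting the cost looks like this:

```python
import tensorflow as tf
from tensorpack import ModelDesc

class MyModel(ModelDesc):
    # input declaration is omitted for brevity
    def _build_graph(self, input_vars):
        image, label = input_vars
        # a toy linear classifier, only to show where the cost goes
        logits = tf.layers.dense(image, 10, name='fc')
        cost = tf.nn.sparse_softmax_cross_entropy_with_logits(
            logits=logits, labels=label)
        # single-cost trainers minimize whatever is assigned to self.cost
        self.cost = tf.reduce_mean(cost, name='cost')
```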
Most existing trainers were implemented with a TensorFlow queue to prefetch and buffer
training data, which is significantly faster than
a naive `sess.run(..., feed_dict={...})`.
There are also multi-GPU trainers which include the logic of data-parallel multi-GPU training,
...@@ -23,11 +24,11 @@ To use trainers, pass a `TrainConfig` to configure them:
```python
config = TrainConfig(
    dataflow=my_dataflow,
    optimizer=tf.train.AdamOptimizer(0.01),
    callbacks=[...],
    model=MyModel()
)

# start training:
# SimpleTrainer(config).train()
```
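The multi-GPU trainers mentioned above are started the same way (a sketch; it assumes `SyncMultiGPUTrainer` is available and that the GPUs/towers to use are set in the config or environment):

```python
# data-parallel, synchronous multi-GPU training with the same config
SyncMultiGPUTrainer(config).train()
```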
...@@ -47,7 +48,7 @@ Some trainer takes data from a TensorFlow reading pipeline instead of a Dataflow
([PTB example](../examples/PennTreebank)).

## Write a trainer

The existing trainers should be enough for single-cost optimization tasks. If you
want to do something inside the trainer, consider writing it as a callback, or
...
...@@ -22,8 +22,8 @@ from timitdata import TIMITBatch
BATCH = 64
NLAYER = 2
HIDDEN = 128
NR_CLASS = 61 + 1   # 61 phoneme + epsilon
FEATUREDIM = 39     # MFCC feature dimension

class Model(ModelDesc):
...@@ -100,8 +100,9 @@ def get_config(ds_train, ds_test):
            StatMonitorParamSetter('learning_rate', 'error',
                                   lambda x: x * 0.2, 0, 5),
            HumanHyperParamSetter('learning_rate'),
            PeriodicTrigger(
                InferenceRunner(ds_test, [ScalarStats('error')]),
                every_k_epochs=2),
        ],
        model=Model(),
        steps_per_epoch=steps_per_epoch,
...
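For reference, `PeriodicTrigger` (which replaces `PeriodicCallback` here) wraps another callback and only triggers it every k epochs. A usage sketch outside of this example, assuming it wraps any callback the same way:

```python
from tensorpack.callbacks import PeriodicTrigger, ModelSaver

# save the model only every 5 epochs instead of every epoch
# (every_k_epochs is the keyword used in the config above)
saver = PeriodicTrigger(ModelSaver(), every_k_epochs=5)
```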
...@@ -97,7 +97,6 @@ def eval_model_multithread(cfg, nr_eval):
class Evaluator(Callback):
    def __init__(self, nr_eval, input_names, output_names):
        self.eval_episode = nr_eval
        self.input_names = input_names
...
...@@ -102,23 +102,29 @@ class RandomResize(ImageAugmentor):
            yrange (tuple): (min, max) range of scaling ratio for h
            minimum (tuple): (xmin, ymin). avoid scaling down too much.
            aspect_ratio_thres (float): discard samples which change aspect ratio
                larger than this threshold. Set to 0 to keep aspect ratio.
            interp: cv2 interpolation method
        """
        super(RandomResize, self).__init__()
        assert aspect_ratio_thres >= 0
        if aspect_ratio_thres == 0:
            assert xrange == yrange
        self._init(locals())

    def _get_augment_params(self, img):
        cnt = 0
        while True:
            sx = self._rand_range(*self.xrange)
            if self.aspect_ratio_thres == 0:
                sy = sx
            else:
                sy = self._rand_range(*self.yrange)
            destX = int(max(sx * img.shape[1], self.minimum[0]))
            destY = int(max(sy * img.shape[0], self.minimum[1]))
            oldr = img.shape[1] * 1.0 / img.shape[0]
            newr = destX * 1.0 / destY
            diff = abs(newr - oldr) / oldr
            if diff <= self.aspect_ratio_thres + 1e-7:
                return (destX, destY)
            cnt += 1
            if cnt > 50:
...
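To illustrate the new behaviour (a usage sketch, not part of this commit): with `aspect_ratio_thres=0` the same scaling factor is applied to both axes, which is why the constructor now asserts `xrange == yrange`; with a non-zero threshold, the two factors are sampled independently and overly distorting samples are rejected:

```python
from tensorpack.dataflow import imgaug

# scale images by a random factor in [0.8, 1.2] while keeping aspect ratio
resize_keep = imgaug.RandomResize(xrange=(0.8, 1.2), yrange=(0.8, 1.2),
                                  aspect_ratio_thres=0)

# independent x/y scaling, discarding samples that change aspect ratio by >15%
resize_loose = imgaug.RandomResize(xrange=(0.8, 1.2), yrange=(0.9, 1.1),
                                   aspect_ratio_thres=0.15)
```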
...@@ -97,7 +97,7 @@ def BatchNormV1(x, use_local_stat=None, decay=0.9, epsilon=1e-5):
@layer_register(log_shape=False)
def BatchNorm(x, use_local_stat=None, decay=0.9, epsilon=1e-5):
    """
    Batch normalization layer, as described in the paper:
    `Batch Normalization: Accelerating Deep Network Training by
...@@ -188,6 +188,3 @@ def BatchNormV2(x, use_local_stat=None, decay=0.9, epsilon=1e-5):
        return tf.identity(xn, name='output')
    else:
        return tf.identity(xn, name='output')
BatchNorm = BatchNormV2
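With the `BatchNorm = BatchNormV2` alias removed, `BatchNorm` is now the registered layer itself, so call sites keep the same form. A sketch of typical usage inside a model's graph construction (fragment only; `l` and the surrounding model are assumed from context):

```python
# inside a model's _build_graph, under the trainer's tower context:
l = Conv2D('conv1', l, 64, 3, nl=tf.identity)
l = BatchNorm('bn1', l)      # layer name first, then the input tensor
l = tf.nn.relu(l)
```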
...@@ -57,7 +57,9 @@ class LinearWrap(object):
                return f
            else:
                if layer_name != 'tf':
                    logger.warn(
                        "You're calling LinearWrap.__getattr__ with {}:"
                        " neither a layer nor 'tf'!".format(layer_name))
                assert isinstance(layer, ModuleType)
                return LinearWrap._TFModuleFunc(layer, self._t)
...
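For context (a usage sketch, not part of this commit): `LinearWrap` chains registered layers on a tensor, and attribute access that is neither a registered layer nor `tf` is what now triggers the more informative warning above:

```python
import tensorflow as tf
from tensorpack import LinearWrap

image = tf.placeholder(tf.float32, [None, 28, 28, 3], name='input')
l = (LinearWrap(image)
     .Conv2D('conv0', 32, 3)     # registered layers chain directly
     .MaxPooling('pool0', 2)
     .tf.multiply(0.5)           # raw tensorflow ops go through the '.tf' namespace
     ())                         # call to unwrap back to the tensor
```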
...@@ -188,6 +188,9 @@ class TensorInput(FeedfreeInput):
            size(int): size of this input. Use None to leave it undefined.
        """
        self.get_tensor_fn = get_tensor_fn
        if size is not None:
            size = int(size)
            assert size > 0
        self._size = size

    def size(self):
...
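A usage sketch for context (hypothetical names; the top-level import is an assumption): `TensorInput` wraps a function returning the input tensors of one step, and `size`, when given, must now be a positive integer:

```python
import tensorflow as tf
from tensorpack import TensorInput   # import location assumed

def get_tensors():
    # stand-in for a real TF reading pipeline (queue, TFRecord reader, ...)
    image = tf.random_normal([64, 28, 28, 1])
    label = tf.random_uniform([64], maxval=10, dtype=tf.int32)
    return [image, label]

data = TensorInput(get_tensors, size=1000)   # 1000 steps per epoch
```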