Commit 838a4ba3 authored by Yuxin Wu

update incv3

parent 35f24d40
@@ -8,6 +8,7 @@ See some interesting [examples](examples) to learn about the framework:
 + [DoReFa-Net: training binary / low bitwidth CNN](examples/DoReFa-Net)
 + [Double-DQN for playing Atari games](examples/Atari2600)
 + [ResNet for Cifar10 classification](examples/ResNet)
++ [InceptionV3 on ImageNet](examples/Inception/inceptionv3.py)
 + [char-rnn language model](examples/char-rnn)
 ## Features:
@@ -21,7 +21,7 @@ See "Rethinking the Inception Architecture for Computer Vision", arxiv:1512.00567
 This config follows the official inceptionv3 setup (https://github.com/tensorflow/models/tree/master/inception/inception)
 with far fewer lines of code.
-It reaches 73.5% single-crop validation accuracy, same as the official code,
+It reaches 74.5% single-crop validation accuracy, slightly better than the official code,
 and has the same running speed as well.
 """
@@ -9,7 +9,7 @@ Only allow examples with reproducible and meaningful performance.
 + [Reproduce some reinforcement learning papers](Atari2600)
 + [char-rnn for fun](char-rnn)
 + [DisturbLabel, because I don't believe the paper](DisturbLabel)
-+ [DoReFa-Net, binary / low-bitwidth CNN on ImageNet](DoReFa-Net)
++ [DoReFa-Net: binary / low-bitwidth CNN on ImageNet](DoReFa-Net)
 + [GoogleNet-InceptionV1 with 71% accuracy](Inception/inception-bn.py)
-+ [GoogleNet-InceptionV3 with 73.5% accuracy (same as the official code)](Inception/inceptionv3.py)
-+ [ResNet for Cifar10 with similar accuracy, and for SVHN with state-of-the-art accuracy](ResNet)
++ [GoogleNet-InceptionV3 with 74.5% accuracy (similar to the official code)](Inception/inceptionv3.py)
++ [ResNet for Cifar10 and SVHN](ResNet)
@@ -77,3 +77,8 @@ def rms(x, name=None):
         with tf.name_scope(None):  # name already contains the scope
             return tf.sqrt(tf.reduce_mean(tf.square(x)), name=name)
     return tf.sqrt(tf.reduce_mean(tf.square(x)), name=name)
+
+def get_scalar_var(name, init_value):
+    return tf.get_variable(name, shape=[],
+                           initializer=tf.constant_initializer(init_value),
+                           trainable=False)
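For context, a minimal usage sketch of the new `get_scalar_var` helper (not part of this commit; it assumes the TensorFlow 1.x graph/session API, and the `learning_rate` name and the 0.045 / 0.94 numbers below are purely illustrative):

```python
import tensorflow as tf

def get_scalar_var(name, init_value):
    # Same idea as the helper added above: a non-trainable scalar variable,
    # handy for values (e.g. a learning rate) that a callback updates during training.
    return tf.get_variable(name, shape=[],
                           initializer=tf.constant_initializer(init_value),
                           trainable=False)

# Illustrative only: create a schedulable learning-rate variable and a decay op.
lr = get_scalar_var('learning_rate', 0.045)
decay_lr = tf.assign(lr, lr * 0.94)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(lr))        # 0.045
    print(sess.run(decay_lr))  # ~0.0423 after one decay step
```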