Commit 8d668003 authored by Yuxin Wu

docs change

parent 228aead3
@@ -15,6 +15,8 @@ There are two ways to do inference during training.
 This will further support prefetch & data-parallel inference.
 More details to come.
+In both methods, your tower function will be called again, with `TowerContext.is_training==False`.
+You can build a different graph using this predicate.
 ## Inference After Training
......
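The predicate mentioned above is read from the current tower context inside the tower function. A minimal sketch (not part of this commit), assuming tensorpack's `get_current_tower_context()` helper and TF 1.x layers:

```python
# Minimal sketch: branch the graph on TowerContext.is_training.
# Assumes tensorpack's get_current_tower_context() and TF 1.x APIs.
import tensorflow as tf
from tensorpack.tfutils.tower import get_current_tower_context

def tower_func(image):
    ctx = get_current_tower_context()
    x = tf.layers.conv2d(image, 32, 3, activation=tf.nn.relu, name='conv0')
    # Only apply dropout in the training graph; the inference graph
    # built with is_training == False skips it entirely.
    if ctx.is_training:
        x = tf.layers.dropout(x, rate=0.5, training=True)
    return x
```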
@@ -91,7 +91,7 @@ def detect_one_image(img, model_func):
     return results
-def eval_on_dataflow(df, detect_func):
+def eval_coco(df, detect_func):
     """
     Args:
         df: a DataFlow which produces (image, image_id)
......
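The renamed `eval_coco` keeps the same contract. As a rough illustration of that contract (not the file's actual body; the result-dict fields follow the COCO results format and the `DetectionResult` attribute names are assumptions):

```python
# Hypothetical sketch of the eval_coco contract. DataFlow's
# reset_state()/get_data() are real tensorpack APIs; the DetectionResult
# attributes (class_id, box, score) are assumptions for illustration.
def eval_coco(df, detect_func):
    all_results = []
    df.reset_state()
    for img, img_id in df.get_data():
        for r in detect_func(img):
            all_results.append({
                'image_id': img_id,
                'category_id': r.class_id,
                'bbox': list(r.box),        # COCO expects [x, y, w, h]
                'score': float(r.score),
            })
    return all_results
```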
@@ -39,7 +39,7 @@ from viz import (
     draw_predictions, draw_final_outputs)
 from common import print_config
 from eval import (
-    eval_on_dataflow, detect_one_image, print_evaluation_scores, DetectionResult)
+    eval_coco, detect_one_image, print_evaluation_scores, DetectionResult)
 import config
@@ -280,7 +280,7 @@ def visualize(model_path, nr_visualize=50, output_dir='output'):
 def offline_evaluate(pred_func, output_file):
     df = get_eval_dataflow()
-    all_results = eval_on_dataflow(
+    all_results = eval_coco(
         df, lambda img: detect_one_image(img, pred_func))
     with open(output_file, 'w') as f:
         json.dump(all_results, f)
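The file written by `offline_evaluate` is in COCO results format, so it can be scored directly with pycocotools. A sketch, assuming a COCO annotation file path:

```python
# Sketch: score the dumped JSON with pycocotools (roughly what
# print_evaluation_scores does). The annotation path is an assumption.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco = COCO('annotations/instances_minival2014.json')  # assumed path
coco_results = coco.loadRes('output.json')             # file from offline_evaluate
cocoeval = COCOeval(coco, coco_results, 'bbox')
cocoeval.evaluate()
cocoeval.accumulate()
cocoeval.summarize()   # prints the standard mAP table
```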
@@ -309,7 +309,7 @@ class EvalCallback(Callback):
         self.epochs_to_eval.add(self.trainer.max_epoch)
     def _eval(self):
-        all_results = eval_on_dataflow(self.df, lambda img: detect_one_image(img, self.pred))
+        all_results = eval_coco(self.df, lambda img: detect_one_image(img, self.pred))
         output_file = os.path.join(
             logger.get_logger_dir(), 'outputs{}.json'.format(self.global_step))
         with open(output_file, 'w') as f:
......
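`EvalCallback` is the hook that runs this evaluation periodically during training. A hypothetical excerpt of how such a callback gets registered (the constructor arguments and helper names here are assumptions):

```python
# Hypothetical wiring: register the eval callback in a tensorpack
# TrainConfig. TrainConfig(model=..., dataflow=..., callbacks=...) is the
# real API; EvalCallback's constructor signature is an assumption.
from tensorpack import TrainConfig

config = TrainConfig(
    model=model,                    # the detection ModelDesc
    dataflow=get_train_dataflow(),  # assumed training-data helper
    callbacks=[EvalCallback()],     # runs eval_coco at scheduled epochs
    max_epoch=1000,
)
```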
@@ -3,12 +3,14 @@
 Training examples with __reproducible performance__.
 __"Reproduce" should always mean reproducing performance__.
 Reproducing a method is usually easy, but you don't know whether you've made mistakes, because wrong code often appears to work.
-Reproducible performance results are what really matters.
+Reproducing __performance__ results is what really matters, and is something that's rarely seen on GitHub.
 See [Unawareness of Deep Learning Mistakes](https://medium.com/@ppwwyyxx/unawareness-of-deep-learning-mistakes-d5b5774da0ba).
 ## Getting Started:
 These examples don't have meaningful performance numbers; they are meant as demos.
 + [An illustrative mnist example with explanation of the framework](mnist-convnet.py)
 + The same mnist example using [tf-slim](mnist-tfslim.py), and [with weights visualizations](mnist-visualizations.py)
 + A tiny [Cifar ConvNet](cifar-convnet.py) and [SVHN ConvNet](svhn-digit-convnet.py)
......
@@ -223,7 +223,7 @@ class QueueInput(FeedfreeInput):
         with self.cached_name_scope():
             # in TF there is no API to get the queue capacity, so we can only summarize the size
             size = tf.cast(self.queue.size(), tf.float32, name='queue_size')
-            size_ema_op = add_moving_summary(size, collection=None)[0].op
+            size_ema_op = add_moving_summary(size, collection=None, decay=0.5)[0].op
         return RunOp(
             lambda: size_ema_op,
             run_before=False,
......
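The added `decay=0.5` makes the queue-size moving average track the raw value much more closely than a decay near 1.0 (the usual default for moving summaries). A tiny sketch of the exponential-moving-average update involved:

```python
# Sketch of the EMA update a moving summary applies.
# With decay=0.5 the summary reacts within a few steps; with a decay
# close to 1.0 it would lag far behind a fast-changing queue size.
def ema_update(ema, value, decay=0.5):
    return decay * ema + (1 - decay) * value

ema = 0.0
for size in [50, 50, 50]:          # queue jumps from empty to 50
    ema = ema_update(ema, size)    # 25.0 -> 37.5 -> 43.75
```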