Commit 76cbc245 authored by Yuxin Wu

update readme

parent afab50e6
# tensorpack
Neural Network Toolbox on TensorFlow
In development but usable. API might change a bit.
See some interesting [examples](https://github.com/ppwwyyxx/tensorpack/tree/master/examples) to learn.
## Features:
Focused on modularity:
+ Models provide a scoped abstraction of common models.
+ Dataflow defines data preprocessing in pure Python.
+ Callbacks control training behavior.
@@ -11,4 +11,4 @@ To run:
./DQN.py --rom breakout.rom --gpu 0
```
A demo trained with Double-DQN on breakout is available at [youtube](https://youtu.be/o21mddZtE5Y).
@@ -156,7 +156,7 @@ class ScalarStats(Inferencer):
class ClassificationError(Inferencer):
    """
    Compute classification error from a `wrong` variable.
    The `wrong` variable is supposed to be an integer equal to the number of failed samples in this batch.
    You can use `tf.nn.in_top_k` to record top-k error as well.
@@ -164,12 +164,12 @@ class ClassificationError(Inferencer):
    This callback produces the "true" error,
    taking into account the fact that batches might not have the same size in
    testing (because the size of the test set might not be a multiple of the batch size).
    Therefore the result is different from averaging the error rate of each batch.
    """
    def __init__(self, wrong_var_name='wrong:0', summary_name='validation_error'):
        """
        :param wrong_var_name: name of the `wrong` variable
        :param summary_name: the name for logging
        """
        self.wrong_var_name = wrong_var_name
        self.summary_name = summary_name
@@ -189,6 +189,9 @@ class ClassificationError(Inferencer):
        self.trainer.write_scalar_summary(self.summary_name, self.err_stat.accuracy)
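The difference between the "true" error and a per-batch average that the docstring describes can be illustrated with a small standalone sketch (hypothetical numbers, not tensorpack code):

```python
# Hypothetical test run: three batches from a test set of 306 samples;
# the last batch is smaller because 306 is not a multiple of 128.
batches = [(128, 10), (128, 12), (50, 2)]  # (batch_size, num_wrong)

# "True" error: total wrong samples over total samples.
true_err = sum(w for _, w in batches) / sum(n for n, _ in batches)

# Naive alternative: average the per-batch error rates.
avg_of_rates = sum(w / n for n, w in batches) / len(batches)

# The two disagree: the naive average over-weights the small final batch.
```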
class BinaryClassificationStats(Inferencer):
    """ Compute precision/recall in binary classification, given the
    prediction vector and the label vector.
    """
    def __init__(self, pred_var_name, label_var_name, summary_prefix='val'):
        """
...
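As a rough illustration of the statistics this class computes (plain Python with hypothetical prediction and label vectors; the actual inferencer reads the named tensors during inference):

```python
# Hypothetical binary prediction and label vectors.
preds  = [1, 1, 0, 1, 0, 0]
labels = [1, 0, 0, 1, 1, 0]

tp = sum(1 for p, l in zip(preds, labels) if p == 1 and l == 1)  # true positives
fp = sum(1 for p, l in zip(preds, labels) if p == 1 and l == 0)  # false positives
fn = sum(1 for p, l in zip(preds, labels) if p == 0 and l == 1)  # false negatives

precision = tp / (tp + fp)  # fraction of positive predictions that are correct
recall = tp / (tp + fn)     # fraction of positive labels that are found
```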