Commit e78e2e1e authored by Yuxin Wu

update docs

parent 63bdc43b
@@ -20,7 +20,7 @@ Then it is a good time to open an issue.
 1. Learn `tf.Print`.
-2. Know [DumpTensors](../modules/callbacks.html#tensorpack.callbacks.DumpTensors[]),
+2. Know [DumpTensors](../modules/callbacks.html#tensorpack.callbacks.DumpTensors),
 [ProcessTensors](../modules/callbacks.html#tensorpack.callbacks.ProcessTensors) callbacks.
 And it's also easy to write your own version of them.
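The last point, writing your own version of these callbacks, can be sketched without any framework. The class below is a hypothetical, pure-Python stand-in for a `DumpTensors`-style callback (the names `DumpValuesCallback`, `after_step`, and the `fetch` argument are illustrative, not tensorpack's actual API): after each step it fetches the watched values and pickles them to a directory.

```python
import os
import pickle
import tempfile


class DumpValuesCallback:
    """Minimal sketch of a DumpTensors-style callback: after each
    step, fetch the watched values and pickle them to a directory.
    This is a conceptual illustration, not tensorpack's real API."""

    def __init__(self, names, out_dir):
        self.names = names      # which values to watch ("what to log")
        self.out_dir = out_dir  # where the dumps go ("where to log")
        self.step = 0

    def after_step(self, fetch):
        # `fetch` stands in for a session.run over the watched tensors.
        values = {name: fetch(name) for name in self.names}
        path = os.path.join(self.out_dir, "step-%d.pkl" % self.step)
        with open(path, "wb") as f:
            pickle.dump(values, f)
        self.step += 1
        return values


# Usage with a fake "graph" of values standing in for session.run:
graph = {"loss": 0.7, "accuracy": 0.9}
cb = DumpValuesCallback(["loss"], tempfile.mkdtemp())
dumped = cb.after_step(graph.__getitem__)
```

A real implementation would subclass the framework's callback base class and run the fetches inside the training session; the shape of the logic is the same.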
@@ -8,7 +8,8 @@ The default logging behavior should be good enough for normal use cases, so you
 This is how TensorFlow summaries eventually get logged/saved/printed:
-1. __What to Log__: When you call `tf.summary.xxx` in your graph code, TensorFlow adds an op to
+1. __What to Log__: Define what you want to log in the graph.
+   When you call `tf.summary.xxx` in your graph code, TensorFlow adds an op to
 `tf.GraphKeys.SUMMARIES` collection (by default).
 2. __When to Log__: [MergeAllSummaries](../modules/callbacks.html#tensorpack.callbacks.MergeAllSummaries)
 callback is in the [default callbacks](../modules/train.html#tensorpack.train.DEFAULT_CALLBACKS).
@@ -25,8 +26,22 @@ This is how TensorFlow summaries eventually get logged/saved/printed:
 All the "what, when, where" can be customized in either the graph or with the callbacks/monitors setting.
-Since TF summaries are evaluated infrequently (every epoch) by default, if the content is data-dependent, the values
-could have high variance. To address this issue, you can:
+The design goal of disentangling "what, when, where" is to make components reusable.
+Suppose you have `M` items to log
+(possibly from different places, not necessarily the graph)
+and `N` backends to log your data to; you then
+automatically obtain all the `MxN` combinations.
+That said, if you only care about logging one specific item (e.g. for
+debugging purposes), you can check out the
+[FAQ](http://tensorpack.readthedocs.io/tutorial/faq.html#how-to-print-dump-intermediate-results-in-training)
+for easier options.
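The `MxN` reuse argument can be made concrete with a small pure-Python sketch. The `Monitor`/`Monitors` names below are hypothetical (loosely inspired by the monitor concept, not tensorpack's actual classes): each logged item is dispatched to every backend, so `M` items and `N` backends need no per-pair glue code.

```python
class Monitor:
    """One logging backend (a "where"): records what it receives."""

    def __init__(self, name):
        self.name = name
        self.received = []

    def put_scalar(self, key, value):
        self.received.append((key, value))


class Monitors:
    """Dispatcher: forwards every logged item to every backend, so
    M items x N backends yield all M*N deliveries automatically."""

    def __init__(self, backends):
        self.backends = backends

    def put_scalar(self, key, value):
        for b in self.backends:
            b.put_scalar(key, value)


# Two backends (N=2) receive three items (M=3) -> 6 deliveries total.
stdout_like, file_like = Monitor("stdout"), Monitor("file")
monitors = Monitors([stdout_like, file_like])
for key, value in [("loss", 0.5), ("lr", 0.01), ("acc", 0.9)]:
    monitors.put_scalar(key, value)
```

Adding a new backend (or a new item to log) then requires touching only one side of the dispatch, which is the reusability the paragraph above describes.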
+### Noisy TensorFlow Summaries
+Since TF summaries are evaluated infrequently (every epoch) by default,
+if the content is data-dependent, the values could have high variance.
+To address this issue, you can:
+1. Change "When to Log": log more frequently, but note that certain summaries can be expensive to
+log. You may want to use a separate collection for frequent logging.
+2. Change "What to Log": you can call
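The "separate collection" suggestion in item 1 can be sketched framework-free. The registry below is a hypothetical, pure-Python stand-in for TF's summary collections (the names `add_summary`, `run_collection`, and the collection keys are illustrative): cheap summaries go into their own collection, which can be evaluated at a different, higher frequency than the default one.

```python
from collections import defaultdict

# Stand-in for TF's named collections of summary ops.
collections = defaultdict(list)


def add_summary(name, fn, collection="default"):
    """Register a summary under a collection key ("what to log")."""
    collections[collection].append((name, fn))


def run_collection(collection):
    """Mimic evaluating the merged summary op of one collection only."""
    return {name: fn() for name, fn in collections[collection]}


# Expensive summary stays in the default collection (evaluated rarely);
# a cheap one goes into a separate "per-step" collection (evaluated often).
add_summary("expensive-histogram", lambda: "histogram-data")
add_summary("cheap-loss", lambda: 0.25, collection="per-step")

per_step = run_collection("per-step")   # run often; cheap items only
per_epoch = run_collection("default")   # run rarely; may be expensive
```

Because the two collections are merged and run independently, frequent logging never pays the cost of the expensive summaries.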