Commit 80a68c59 authored by Yuxin Wu

update docs

parent 42416945
@@ -32,7 +32,7 @@ It's always better to copy-paste what you observed instead of describing them.
 It's always better to paste **as much as possible**, although sometimes a partial log is OK.
 Tensorpack typically saves stdout to its training log.
-If stderr is relevant, you can run a command with `CMD 2>&1 | tee logs.txt`
+If stderr is relevant, you can run a command with `my_command 2>&1 | tee logs.txt`
 to save both stdout and stderr to one file.
 (2) **Other observations, if any:**
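The `2>&1 | tee` idiom mentioned in the hunk above can be tried with any command; a minimal sketch, using a stand-in brace group in place of a real training command:

```shell
# Stand-in for a real training command: writes one line to stdout, one to stderr.
# 2>&1 merges stderr into stdout before the pipe; tee shows the output
# on the terminal and also saves it to logs.txt.
{ echo "step 1: loss=0.5"; echo "WARNING: NaN gradient" 1>&2; } 2>&1 | tee logs.txt
grep -q "WARNING: NaN gradient" logs.txt && echo "stderr was captured"
```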
@@ -53,7 +53,7 @@ We do not answer machine learning questions and it is your responsibility to
 figure out how to make your models more accurate.
 ### 4. Your environment:
-+ Paste the output of this command: `python3 -c 'import tensorpack.tfutils as u; print(u.collect_env_info())'`
++ Paste the output of this command: `python -c 'import tensorpack.tfutils as u; print(u.collect_env_info())'`
 If this command failed, tell us your version of Python/TF/tensorpack.
 + You can install Tensorpack master by `pip install -U git+https://github.com/ppwwyyxx/tensorpack.git`
 and see if your issue is already solved.
@@ -69,7 +69,7 @@ monitors=[  # monitors are a special kind of callbacks. these are also ena
 ]
 ```
-Notice that callbacks cover every detail of training, ranging from graph operations to the progress bar.
+You can see from the above snippet that callbacks cover every detail of training, ranging from graph operations to the progress bar.
 This means you can customize every part of the training to your preference, e.g. display something
 different in the progress bar, evaluate part of the summaries at a different frequency, etc.
 Similar concepts also exist in other frameworks, such as Keras callbacks, or
@@ -80,5 +80,7 @@ These features are not always necessary, but think about how messy the main loop
 were to write this logic together with the loops, and how easy your life would be if you could enable
 these features with just one line when you need them.
+See [list of callbacks](../modules/callbacks.html)
+for a long list of tensorpack builtin callbacks.
 See [Write a callback](extend/callback.html)
 for details on how callbacks work, what they can do, and how to write them.
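The division of labor these docs describe can be sketched with a toy training loop. This is plain Python, not tensorpack's actual trainer; only the method names (`before_epoch`, `trigger_step`, `trigger_epoch`) mirror tensorpack's `Callback` interface:

```python
# Toy sketch of the callback pattern: the trainer owns the loops and
# delegates everything else (logging, summaries, progress bars, ...)
# to callback objects invoked at fixed points.
class Callback:
    def before_epoch(self): pass
    def trigger_step(self): pass   # called after every training step
    def trigger_epoch(self): pass  # called after every epoch

class StepCounter(Callback):
    """A trivial callback: counts how many steps ran."""
    def __init__(self):
        self.steps = 0
    def trigger_step(self):
        self.steps += 1

def train(callbacks, epochs=2, steps_per_epoch=3):
    for _ in range(epochs):
        for cb in callbacks:
            cb.before_epoch()
        for _ in range(steps_per_epoch):
            # ... one training iteration would run here ...
            for cb in callbacks:
                cb.trigger_step()
        for cb in callbacks:
            cb.trigger_epoch()

counter = StepCounter()
train([counter])
print(counter.steps)  # 6
```

Enabling or disabling a feature is then just adding or removing one element of the `callbacks` list, which is the "one line" convenience the docs refer to.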
@@ -39,6 +39,8 @@ Then it is a good time to open an issue.
 ## How to freeze some variables in training
 1. Learn `tf.stop_gradient`. You can simply use `tf.stop_gradient` in your model code in many situations (e.g. to freeze the first several layers).
+   Note that it stops the gradient flow in the current Tensor, but your variables may still contribute to the
+   final loss through other tensors (e.g., weight decay).
 2. [varreplace.freeze_variables](../modules/tfutils.html#tensorpack.tfutils.varreplace.freeze_variables) returns a context where variables are frozen.
 It is implemented by the `custom_getter` argument of `tf.variable_scope` -- learn it to gain more control over what & how variables are frozen.
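The weight-decay caveat added in this hunk can be seen with a tiny hand-computed gradient. This is plain Python, not TensorFlow; the loss `stop_gradient(w*x) + wd*w**2` and all numbers are made up for illustration:

```python
# Toy illustration of why tf.stop_gradient does not fully freeze a variable:
#   loss = stop_gradient(w * x) + wd * w**2
# stop_gradient blocks the gradient through the first term, but the
# weight-decay term still produces a nonzero gradient for w.
def grad_w(w, x, wd):
    through_stop_gradient = 0.0          # blocked by stop_gradient
    through_weight_decay = 2.0 * wd * w  # d(wd * w^2)/dw = 2*wd*w
    return through_stop_gradient + through_weight_decay

print(round(grad_w(w=3.0, x=1.0, wd=0.1), 6))  # 0.6
```

So with weight decay in the loss, `w` keeps moving even though its layer's output is wrapped in `stop_gradient`; truly freezing it requires excluding it from the loss (or from the optimizer) as well.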
@@ -81,6 +81,8 @@ class RunUpdateOps(RunOp):
     Be careful when using ``UPDATE_OPS`` if your model contains more than one sub-network.
     Perhaps not all updates are supposed to be executed in every iteration.
+
+    This callback is one of the :func:`DEFAULT_CALLBACKS()`.
     """
     def __init__(self, collection=None):
@@ -547,7 +547,7 @@ class CometMLMonitor(MonitorBase):
     Note:
     1. comet_ml requires you to `import comet_ml` before importing tensorflow or tensorpack.
-    2. The "automatic output logging" feature will make the training progress bar appear to freeze.
+    2. The "automatic output logging" feature of comet_ml will make the training progress bar appear to freeze.
     Therefore the feature is disabled by default.
     """
     def __init__(self, experiment=None, api_key=None, tags=None, **kwargs):
@@ -44,7 +44,10 @@ class TensorPrinter(Callback):
 class ProgressBar(Callback):
-    """ A progress bar based on tqdm. Enabled by default. """
+    """ A progress bar based on tqdm.
+
+    This callback is one of the :func:`DEFAULT_CALLBACKS()`.
+    """
     _chief_only = False
@@ -16,12 +16,13 @@ __all__ = ['MovingAverageSummary', 'MergeAllSummaries', 'SimpleMovingAverage']
 class MovingAverageSummary(Callback):
     """
-    This callback is enabled by default.
     Maintain the moving average of summarized tensors in every step,
     by ops added to the collection.
     Note that it only **maintains** the moving averages by updating
     the relevant variables in the graph,
     the actual summary should be done in other callbacks.
+
+    This callback is one of the :func:`DEFAULT_CALLBACKS()`.
     """
     def __init__(self, collection=MOVING_SUMMARY_OPS_KEY, train_op=None):
         """
@@ -118,9 +119,10 @@ class MergeAllSummaries_RunWithOp(Callback):
 def MergeAllSummaries(period=0, run_alone=False, key=None):
     """
-    This callback is enabled by default.
     Evaluate all summaries by ``tf.summary.merge_all``, and write them to logs.
+
+    This callback is one of the :func:`DEFAULT_CALLBACKS()`.
     Args:
         period (int): by default the callback summarizes once every epoch.
             This option (if not set to 0) makes it additionally summarize every ``period`` steps.
@@ -199,6 +199,7 @@ def add_moving_summary(*args, **kwargs):
     """
     Summarize the moving average for scalar tensors.
     This function is a no-op if not called from the main training tower.
+    See tutorial at https://tensorpack.readthedocs.io/tutorial/summary.html
     Args:
         args: scalar tensors to summarize
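The moving averages these docstrings refer to are exponential moving averages. A minimal sketch of the per-step update rule, in plain Python (the decay value here is illustrative, not tensorpack's default):

```python
# EMA update: avg <- decay * avg + (1 - decay) * value.
# This is the kind of per-step update a callback like MovingAverageSummary
# maintains for each summarized scalar; writing the averaged value to the
# event file is then a separate callback's job.
def ema_update(avg, value, decay=0.9):
    return decay * avg + (1 - decay) * value

avg = 0.0
for loss in [1.0, 1.0, 1.0]:  # three steps with a constant loss
    avg = ema_update(avg, loss)
print(round(avg, 3))  # 0.271
```

The average approaches the raw value only gradually, which is why summaries of a noisy training loss look much smoother than the per-step values.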