Commit 83cd2cf1 authored by Yuxin Wu

update docs
parent 81a4fc33
## DO NOT post an issue if you're seeing this. You're at the wrong place.
__If you meet any unexpected problems when running the code, or want to report bugs, please STOP here__. Go to the
following link instead and fill out the information there:
https://github.com/tensorpack/tensorpack/issues/new?template=unexpected-problems---bugs.md
To post an issue, please:
1. Click the "New Issue" button
2. __Choose your category__!
3. __Read instructions there__!
Otherwise, you can post here for:
1. Feature Requests:
+ Note that you can implement a lot of features by extending Tensorpack
(See http://tensorpack.readthedocs.io/en/latest/tutorial/index.html#extend-tensorpack).
It does not have to be added to Tensorpack unless you have a good reason.
An issue has to be one of the following:
- Unexpected Problems / Potential Bugs
- Feature Requests
- Questions on Using/Understanding Tensorpack
2. Questions on Using/Understanding Tensorpack:
+ Your question is probably answered in [tutorials](http://tensorpack.readthedocs.io/en/latest/tutorial/index.html#user-tutorials). Read them first.
+ We answer "HOW to do X with Tensorpack" for a well-defined X.
We also answer "HOW/WHY Tensorpack does X" for some X that Tensorpack or its examples are doing.
Some typical questions that we DO NOT answer:
+ "Could you improve/implement an example/paper ?" --
We have no plans to do so. We don't consider feature
requests for examples or implement a paper for you, unless it demonstrates
some Tensorpack features not yet demonstrated in the existing examples.
If you don't know how to do something yourself, you may ask a usage question.
+ "The examples do not perform well after I change the models/dataset/parameters/etc."
Tensorpack maintainers make sure the examples perform well without modification.
But it's your job to make sure the model and parameters is suitable in your own situation.
We do not help with such questions unless they appear to be a bug in tensorpack.
+ "Why my model doesn't work?", "I don't understand this paper you implement."
We do not answer machine learning questions.
You can also use gitter (https://gitter.im/tensorpack/users) for more casual discussions.
......@@ -8,7 +8,7 @@ __PLEASE ALWAYS INCLUDE__:
1. What you did:
+ If you're using examples:
+ What's the command you run:
+ Have you made any changes to code? Paste them if any:
+ Have you made any changes to the examples? Paste them if any:
+ If not, tell us what you did that may be relevant.
But we may not investigate it if there is no reproducible code.
+ It's better to paste what you did than to describe it.
......
......@@ -5,6 +5,9 @@ queries:
- exclude: py/import-and-import-from
- exclude: py/similar-function
- exclude: py/unused-local-variable
# buggy: https://discuss.lgtm.com/t/python-false-positive-about-super/1330/3
- exclude: py/super-not-enclosing-class
- exclude: py/unreachable-statement
extraction:
python:
prepare:
......
......@@ -95,7 +95,7 @@ sonnet/Keras manages the variable scope by their own model classes, and calling
always creates a new variable scope. See the [Keras example](../examples/keras) for how to use it within tensorpack.
```eval_rst
.. note:: **It's best to not trust others' layers!**.
.. note:: **It's best to not trust others' layers!**
For non-standard layers that are not included in TensorFlow or Tensorpack, it's best to implement them yourself.
Non-standard layers often do not have a mathematical definition that people
......
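In line with the advice above to implement non-standard layers yourself, here is a minimal sketch of a custom layer registered with tensorpack's `layer_register` decorator. The layer name `MyScale` and its body are hypothetical, purely for illustration:

```python
import tensorflow as tf
from tensorpack.models import layer_register

@layer_register(log_shape=True)
def MyScale(x, init_value=1.0):
    # Hypothetical layer: multiply the input by one trainable scalar.
    # Once registered, it is called like built-in layers: MyScale('scale1', input).
    scale = tf.get_variable('scale', shape=[],
                            initializer=tf.constant_initializer(init_value))
    return x * scale
```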
......@@ -500,6 +500,9 @@ if __name__ == '__main__':
if get_tf_version_tuple() < (1, 6):
# https://github.com/tensorflow/tensorflow/issues/14657
logger.warn("TF<1.6 has a bug which may lead to crash in FasterRCNN if you're unlucky.")
if get_tf_version_tuple() == (1, 11):
# https://github.com/tensorflow/tensorflow/issues/22750
logger.warn("TF=1.11 has a bug which leads to crash in inference.")
args = parser.parse_args()
if args.config:
......
......@@ -101,19 +101,19 @@ class MergeAllSummaries_RunWithOp(Callback):
def MergeAllSummaries(period=0, run_alone=False, key=tf.GraphKeys.SUMMARIES):
"""
This callback is enabled by default.
Evaluate all summaries by `tf.summary.merge_all`, and write to logs.
Evaluate all summaries by `tf.summary.merge_all`, and write them to logs.
Args:
period (int): by default the callback summarizes once every epoch.
This option (if not set to 0) makes it additionally summarize every ``period`` steps.
run_alone (bool): whether to evaluate the summaries alone.
If True, summaries will be evaluated after each epoch alone.
If False, summaries will be evaluated together with other
If False, summaries will be evaluated together with the
`sess.run` calls, in the last step of each epoch.
For :class:`SimpleTrainer`, it needs to be False because summaries may
depend on inputs.
key (str): the collection of summary tensors. Same as in `tf.summary.merge_all`.
Default is ``tf.GraphKeys.SUMMARIES``
Default is ``tf.GraphKeys.SUMMARIES``.
"""
period = int(period)
if run_alone:
......
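A hedged usage sketch for the docstring above. Since `MergeAllSummaries` is enabled by default, changing its `period` presumably means rebuilding the default callback list through `extra_callbacks`; the model and dataflow names below are placeholders, not from this commit:

```python
from tensorpack import TrainConfig
from tensorpack.callbacks import (MergeAllSummaries, MovingAverageSummary,
                                  ProgressBar, RunUpdateOps)

# Sketch: write merged summaries every 100 steps in addition to once per epoch.
config = TrainConfig(
    model=my_model,        # placeholder ModelDesc
    dataflow=my_dataflow,  # placeholder DataFlow
    extra_callbacks=[
        MovingAverageSummary(),
        ProgressBar(),
        MergeAllSummaries(period=100),  # 0 (default) = summarize only once per epoch
        RunUpdateOps(),
    ],
)
```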
......@@ -105,6 +105,8 @@ def BatchNorm(inputs, axis=None, training=None, momentum=0.9, epsilon=1e-5,
This implementation averages the per-GPU E[x] and E[x^2] among GPUs to compute
global mean & variance. Therefore each GPU needs to have the same batch size.
It will match the BatchNorm layer on each GPU by its name (`BatchNorm('name', input)`).
If names do not match, the operation will hang.
This option has no effect when not training.
......
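A sketch of what the cross-GPU averaging described above looks like in a tower function, assuming the `sync_statistics='nccl'` argument is what enables it; each GPU tower calls the layer under the same name so the statistics can be matched:

```python
from tensorpack.models import BatchNorm, Conv2D

def tower_fn(image):
    # Every GPU tower runs this function. The BatchNorm layer is created
    # with the same name ('bn1') on each GPU, so its E[x] and E[x^2]
    # can be matched by name and averaged across GPUs during training.
    x = Conv2D('conv1', image, 64, 3)
    x = BatchNorm('bn1', x, sync_statistics='nccl')  # assumption: 'nccl' mode
    return x
```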
......@@ -152,6 +152,7 @@ class SyncMultiGPUTrainerReplicated(SingleCostTrainer):
Supported values: ['nccl', 'hierarchical', 'cpu'].
By default, it is picked automatically by heuristics.
These modes may have slight (within 5%) differences in speed.
use_nccl: deprecated option
"""
self.devices = gpus
......
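A usage sketch for the trainer above, assuming the first argument is the number of GPUs and that `mode=None` triggers the automatic heuristic choice; `config` stands in for a `TrainConfig` built elsewhere:

```python
from tensorpack import SyncMultiGPUTrainerReplicated, launch_train_with_config

# Sketch: replicated data-parallel training on 8 GPUs.
# mode=None lets the all-reduce mode ('nccl' / 'hierarchical' / 'cpu')
# be picked automatically by heuristics, as documented above.
trainer = SyncMultiGPUTrainerReplicated(8, mode=None)
launch_train_with_config(config, trainer)  # `config` is a TrainConfig defined elsewhere
```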