Commit e51323c1 authored by Yuxin Wu

add docs about logger (fix #998)

parent 0e5c83b5
@@ -385,6 +385,7 @@ _DEPRECATED_NAMES = set([
     'PeriodicRunHooks',
     'get_nr_gpu',
     'start_test',   # TestDataSpeed
+    'ThreadedMapData',
     # deprecated or renamed symbolic code
     'ImageSample',
...
@@ -141,7 +141,7 @@ class MultiProcessPrefetchData(ProxyDataFlow):
         This implies that there will be duplication, reordering, etc.
         You probably only want to use it for training.
-        For example, if your original dataflow produced the same first datapoint,
+        For example, if your original dataflow contains no randomness and produces the same first datapoint,
         then after parallel prefetching, the datapoint will be produced ``nr_proc`` times
         at the beginning.
         Even when your original dataflow is fully shuffled, you still need to be aware of the
...
@@ -238,7 +238,7 @@ class PrefetchDataZMQ(_MultiProcessZMQDataFlow):
         This implies that there will be duplication, reordering, etc.
         You probably only want to use it for training.
-        For example, if your original dataflow produced the same first datapoint,
+        For example, if your original dataflow contains no randomness and produces the same first datapoint,
         then after parallel prefetching, the datapoint will be produced ``nr_proc`` times
         at the beginning.
         Even when your original dataflow is fully shuffled, you still need to be aware of the
...
@@ -386,7 +386,7 @@ class MultiThreadPrefetchData(DataFlow):
         This implies that there will be duplication, reordering, etc.
         You probably only want to use it for training.
-        For example, if your original dataflow produced the same first datapoint,
+        For example, if your original dataflow contains no randomness and produces the same first datapoint,
         then after parallel prefetching, the datapoint will be produced ``nr_thread`` times
         at the beginning.
         Even when your original dataflow is fully shuffled, you still need to be aware of the
...
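The duplication behavior described in these docstrings can be illustrated with a small standalone sketch; `parallel_prefetch` below is a hypothetical stand-in for the real parallel dataflows, which fork worker processes rather than interleave generators:

```python
def dataflow():
    # A deterministic dataflow: no randomness, fixed order.
    yield from range(5)

def parallel_prefetch(make_df, nr_proc):
    # Each worker runs its *own* copy of the dataflow, so a deterministic
    # dataflow is replayed identically nr_proc times. Outputs are merged
    # round-robin here for simplicity.
    workers = [make_df() for _ in range(nr_proc)]
    while workers:
        for w in list(workers):
            try:
                yield next(w)
            except StopIteration:
                workers.remove(w)

out = list(parallel_prefetch(dataflow, 3))
print(out[:3])  # the first datapoint is produced nr_proc times: [0, 0, 0]
```

This is why the docs recommend such wrappers only for training: every datapoint is seen `nr_proc` times per pass, which would bias evaluation.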
# -*- coding: utf-8 -*-
# File: logger.py
+"""
+The logger module itself has the common logging functions of Python's
+:class:`logging.Logger`. For example:
+
+.. code-block:: python
+
+    from tensorpack.utils import logger
+    logger.set_logger_dir('train_log/test')
+    logger.info("Test")
+    logger.error("Error happened!")
+"""
import logging
import os
...
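The pattern the new docstring documents can be sketched as follows; this is a minimal illustration of a module-level logger, not tensorpack's actual implementation, and `set_logger_dir` here is a simplified stand-in:

```python
import logging
import os

# A module-level Logger whose common methods are re-exported, so callers
# can write `logger.info(...)` after `from tensorpack.utils import logger`.
_logger = logging.getLogger("tensorpack")
_logger.setLevel(logging.INFO)
_logger.addHandler(logging.StreamHandler())

# Re-export the common logging functions at module level.
info, warning, error = _logger.info, _logger.warning, _logger.error

def set_logger_dir(dirname):
    # Simplified stand-in: just start writing log records to a file
    # under dirname, in addition to stderr.
    os.makedirs(dirname, exist_ok=True)
    _logger.addHandler(logging.FileHandler(os.path.join(dirname, "log.log")))

set_logger_dir("train_log/test")
info("Test")
error("Error happened!")
```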