Shashank Suhas / seminar-breakout / Commits

Commit 7167cf6d: "Fix some typos. (#752)"
Authored May 09, 2018 by Bohumír Zámečník; committed by Yuxin Wu, May 08, 2018.
Parent: 3c4777ae

18 changed files, with 33 additions and 33 deletions.
Changed files:

- docs/casestudies/colorize.md (+1 / -1)
- tensorpack/contrib/keras.py (+2 / -2)
- tensorpack/dataflow/common.py (+2 / -2)
- tensorpack/dataflow/dataset/ilsvrc.py (+1 / -1)
- tensorpack/dataflow/dftools.py (+2 / -2)
- tensorpack/dataflow/format.py (+3 / -3)
- tensorpack/dataflow/imgaug/base.py (+2 / -2)
- tensorpack/dataflow/imgaug/convert.py (+3 / -3)
- tensorpack/dataflow/imgaug/deform.py (+1 / -1)
- tensorpack/dataflow/imgaug/noise.py (+3 / -3)
- tensorpack/dataflow/imgaug/paste.py (+1 / -1)
- tensorpack/dataflow/parallel.py (+3 / -3)
- tensorpack/dataflow/parallel_map.py (+2 / -2)
- tensorpack/dataflow/raw.py (+1 / -1)
- tensorpack/graph_builder/distributed.py (+1 / -1)
- tensorpack/graph_builder/training.py (+2 / -2)
- tensorpack/input_source/input_source.py (+2 / -2)
- tensorpack/utils/utils.py (+1 / -1)
docs/casestudies/colorize.md

```diff
@@ -122,7 +122,7 @@ def get_data():
     ds = MapData(ds, lambda dp: [cv2.cvtColor(dp[0], cv2.COLOR_RGB2Lab)[:,:,0], dp[0]])
     ds = BatchData(ds, 32)
     ds = PrefetchData(ds, 4)  # use queue size 4
-    ds = PrintData(ds, num=2)  # only for debug
+    ds = PrintData(ds, num=2)  # only for debug
     return ds
```
tensorpack/contrib/keras.py

```diff
@@ -123,7 +123,7 @@ def setup_keras_trainer(
         get_model (input1, input2, ... -> keras.model.Model):
             Takes tensors and returns a Keras model. Will be part of the tower function.
         input (InputSource):
-        optimizer (tf.tarin.Optimizer):
+        optimizer (tf.train.Optimizer):
         loss, metrics: list of strings
     """
     assert isinstance(optimizer, tf.train.Optimizer), optimizer
@@ -213,7 +213,7 @@ class KerasModel(object):
         if nr_gpu <= 1:
             trainer = SimpleTrainer()
         else:
-            # the default multigpu trainer
+            # the default multi-gpu trainer
             trainer = SyncMultiGPUTrainerParameterServer(nr_gpu)
         assert isinstance(trainer, Trainer), trainer
         assert not isinstance(trainer, DistributedTrainerBase)
```
tensorpack/dataflow/common.py

```diff
@@ -84,7 +84,7 @@ class BatchData(ProxyDataFlow):
             remainder (bool): When the remaining datapoints in ``ds`` is not
                 enough to form a batch, whether or not to also produce the remaining
                 data as a smaller batch.
-                If set to False, all produced datapoints are guranteed to have the same batch size.
+                If set to False, all produced datapoints are guaranteed to have the same batch size.
                 If set to True, `ds.size()` must be accurate.
             use_list (bool): if True, each component will contain a list
                 of datapoints instead of an numpy array of an extra dimension.
@@ -706,7 +706,7 @@ class PrintData(ProxyDataFlow):
         Args:
             entry: the datapoint component
-            k (int): index of this componennt in current datapoint
+            k (int): index of this component in current datapoint
             depth (int, optional): recursion depth
             max_depth, max_list: same as in :meth:`__init__`.
```
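The `BatchData` docstring above describes the `remainder` flag: when it is False, a trailing partial batch is dropped so every batch has the same size. A minimal generic sketch of that behavior (illustrative Python, not tensorpack's implementation):

```python
def batch_data(iterable, batch_size, remainder=False):
    """Group an iterable into lists of `batch_size` items.

    With remainder=False, a trailing partial batch is dropped, so every
    produced batch is guaranteed to have the same size. With remainder=True,
    the final smaller batch is yielded as well.
    """
    buf = []
    for item in iterable:
        buf.append(item)
        if len(buf) == batch_size:
            yield buf
            buf = []
    if remainder and buf:
        yield buf
```

For example, batching 7 items in groups of 3 yields two full batches, plus a third batch of one item only when `remainder=True`.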
tensorpack/dataflow/dataset/ilsvrc.py

```diff
@@ -191,7 +191,7 @@ class ILSVRC12(ILSVRC12Files):
             dir_structure (str): One of 'original' or 'train'.
                 The directory structure for the 'val' directory.
                 'original' means the original decompressed directory, which only has list of image files (as below).
-                If set to 'train', it expects the same two-level directory structure simlar to 'dir/train/'.
+                If set to 'train', it expects the same two-level directory structure similar to 'dir/train/'.
                 By default, it tries to automatically detect the structure.
                 You probably do not need to care about this option because 'original' is what people usually have.
```
tensorpack/dataflow/dftools.py

```diff
@@ -84,8 +84,8 @@ def dump_dataflow_to_lmdb(df, lmdb_path, write_frequency=5000):
     with get_tqdm(total=sz) as pbar:
         idx = -1
-        # lmdb transaction is not exception-safe!
-        # although it has a contextmanager interface
+        # LMDB transaction is not exception-safe!
+        # although it has a context manager interface
         txn = db.begin(write=True)
         for idx, dp in enumerate(df.get_data()):
             txn.put(u'{}'.format(idx).encode('ascii'), dumps(dp))
```
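`dump_dataflow_to_lmdb` above takes a `write_frequency` argument: the transaction is committed every so many datapoints, since (as the fixed comment notes) an LMDB transaction is not exception-safe. A sketch of that periodic-commit pattern, using a hypothetical `FakeTxn` stand-in rather than a real LMDB transaction:

```python
class FakeTxn:
    """Stand-in for an LMDB write transaction: buffers puts until commit()."""
    def __init__(self, store):
        self.store = store
        self.buf = {}

    def put(self, key, value):
        self.buf[key] = value

    def commit(self):
        # Flush buffered writes; anything not committed is lost on a crash.
        self.store.update(self.buf)
        self.buf = {}


def dump_with_periodic_commit(datapoints, store, write_frequency=5000):
    """Write datapoints under ascending ascii keys, committing every
    `write_frequency` puts so a failure loses at most one window of writes."""
    txn = FakeTxn(store)
    idx = -1
    for idx, dp in enumerate(datapoints):
        txn.put(str(idx).encode('ascii'), dp)
        if (idx + 1) % write_frequency == 0:
            txn.commit()
            txn = FakeTxn(store)
    txn.commit()  # final partial window
    return idx + 1
```

With a real LMDB handle, `FakeTxn(store)` would correspond to `db.begin(write=True)` as in the diff above.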
tensorpack/dataflow/format.py

```diff
@@ -242,7 +242,7 @@ def CaffeLMDB(lmdb_path, shuffle=True, keys=None):
 class SVMLightData(RNGDataFlow):
-    """ Read X,y from a svmlight file, and produce [X_i, y_i] pairs. """
+    """ Read X,y from an SVMlight file, and produce [X_i, y_i] pairs. """

     def __init__(self, filename, shuffle=True):
         """
@@ -275,9 +275,9 @@ class TFRecordData(DataFlow):
     def __init__(self, path, size=None):
         """
         Args:
-            path (str): path to the tfrecord file
+            path (str): path to the TFRecord file
             size (int): total number of records, because this metadata is not
-                stored in the tfrecord file.
+                stored in the TFRecord file.
         """
         self._path = path
         self._size = int(size)
```
tensorpack/dataflow/imgaug/base.py

```diff
@@ -43,7 +43,7 @@ class Augmentor(object):
         """
         Returns:
             augmented data
-            augmentaion params
+            augmentation params
         """
         return self._augment_return_params(d)
@@ -84,7 +84,7 @@ class Augmentor(object):
         """
         try:
             argspec = inspect.getargspec(self.__init__)
-            assert argspec.varargs is None, "The default __repr__ doesn't work for vaargs!"
+            assert argspec.varargs is None, "The default __repr__ doesn't work for varargs!"
             assert argspec.keywords is None, "The default __repr__ doesn't work for kwargs!"
             fields = argspec.args[1:]
             index_field_has_default = len(fields) - (0 if argspec.defaults is None else len(argspec.defaults))
```
tensorpack/dataflow/imgaug/convert.py

```diff
@@ -10,13 +10,13 @@ __all__ = ['ColorSpace', 'Grayscale', 'ToUint8', 'ToFloat32']
 class ColorSpace(ImageAugmentor):
-    """ Convert into another colorspace. """
+    """ Convert into another color space. """

     def __init__(self, mode, keepdims=True):
         """
         Args:
-            mode: opencv color space conversion code (e.g., `cv2.COLOR_BGR2HSV`)
-            keepdims (bool): keep the dimension of image unchanged if opencv
+            mode: OpenCV color space conversion code (e.g., `cv2.COLOR_BGR2HSV`)
+            keepdims (bool): keep the dimension of image unchanged if OpenCV
                 changes it.
         """
         self._init(locals())
```
tensorpack/dataflow/imgaug/deform.py

```diff
@@ -10,7 +10,7 @@ __all__ = ['GaussianDeform']
 class GaussianMap(object):
-    """ Generate gaussian weighted deformation map"""
+    """ Generate Gaussian weighted deformation map"""
     # TODO really needs speedup
     def __init__(self, image_shape, sigma=0.5):
```
tensorpack/dataflow/imgaug/noise.py

```diff
@@ -10,12 +10,12 @@ __all__ = ['JpegNoise', 'GaussianNoise', 'SaltPepperNoise']
 class JpegNoise(ImageAugmentor):
-    """ Random Jpeg noise. """
+    """ Random JPEG noise. """

     def __init__(self, quality_range=(40, 100)):
         """
         Args:
-            quality_range (tuple): range to sample Jpeg quality
+            quality_range (tuple): range to sample JPEG quality
         """
         super(JpegNoise, self).__init__()
         self._init(locals())
@@ -54,7 +54,7 @@ class GaussianNoise(ImageAugmentor):
 class SaltPepperNoise(ImageAugmentor):
     """ Salt and pepper noise.
-    Randomly set some elements in img to 0 or 255, regardless of its channels.
+    Randomly set some elements in image to 0 or 255, regardless of its channels.
     """
     def __init__(self, white_prob=0.05, black_prob=0.05):
```
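`SaltPepperNoise` above randomly sets elements to 0 or 255 regardless of channel. A generic NumPy sketch of that augmentation (illustrative only; the `salt_pepper` helper is hypothetical, not tensorpack's implementation):

```python
import numpy as np

def salt_pepper(img, white_prob=0.05, black_prob=0.05, rng=None):
    """Return a copy of `img` with random pixels forced to 0 or 255.

    One uniform sample is drawn per pixel position; the bottom `black_prob`
    mass becomes pepper (0) and the top `white_prob` mass becomes salt (255),
    applied across all channels of the affected pixel.
    """
    rng = rng or np.random.default_rng(0)
    u = rng.random(img.shape[:2])          # one sample per (H, W) position
    out = img.copy()
    out[u < black_prob] = 0                # pepper: all channels of the pixel
    out[u > 1 - white_prob] = 255          # salt: all channels of the pixel
    return out
```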
tensorpack/dataflow/imgaug/paste.py

```diff
@@ -84,7 +84,7 @@ class CenterPaste(ImageAugmentor):
 class RandomPaste(CenterPaste):
     """
-    Randomly paste the image onto a background convas.
+    Randomly paste the image onto a background canvas.
     """
     def _get_augment_params(self, img):
```
tensorpack/dataflow/parallel.py

```diff
@@ -101,7 +101,7 @@ class _MultiProcessZMQDataFlow(DataFlow):
             return
         self._reset_done = True

-        # __del__ not guranteed to get called at exit
+        # __del__ not guaranteed to get called at exit
         atexit.register(del_weakref, weakref.ref(self))

         self._reset_once()  # build processes
@@ -134,7 +134,7 @@ class MultiProcessPrefetchData(ProxyDataFlow):
     process by a Python :class:`multiprocessing.Queue`.

     Note:
-        1. An iterator cannot run faster automatically -- what's happenning is
+        1. An iterator cannot run faster automatically -- what's happening is
            that the underlying dataflow will be forked ``nr_proc`` times.
            As a result, we have the following guarantee on the dataflow correctness:
@@ -215,7 +215,7 @@ class PrefetchDataZMQ(_MultiProcessZMQDataFlow):
     and collect datapoints from the given dataflow in each process by ZeroMQ IPC pipe.

     Note:
-        1. An iterator cannot run faster automatically -- what's happenning is
+        1. An iterator cannot run faster automatically -- what's happening is
            that the underlying dataflow will be forked ``nr_proc`` times.
            As a result, we have the following guarantee on the dataflow correctness:
```
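The `atexit.register(del_weakref, weakref.ref(self))` line above works around `__del__` not being guaranteed to run at interpreter exit; registering through a weak reference keeps the exit hook from pinning the object in memory. A minimal sketch of that pattern (the `Resource` class here is hypothetical):

```python
import atexit
import weakref

def del_weakref(ref):
    # Resolve the weak reference; the object may already have been collected.
    obj = ref()
    if obj is not None:
        obj.cleanup()

class Resource:
    def __init__(self):
        self.closed = False
        # Register cleanup via a weakref so the atexit hook does not
        # keep this object alive until interpreter shutdown.
        atexit.register(del_weakref, weakref.ref(self))

    def cleanup(self):
        self.closed = True
```

If the object was already garbage-collected, `ref()` returns None and the hook is a no-op.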
tensorpack/dataflow/parallel_map.py

```diff
@@ -104,7 +104,7 @@ class MultiThreadMapData(_ParallelMapData):
         mixed with datapoints from the next pass.

         You can use **strict mode**, where `MultiThreadMapData.get_data()`
-        is guranteed to produce the exact set which `df.get_data()`
+        is guaranteed to produce the exact set which `df.get_data()`
         produces. Although the order of data still isn't preserved.
     """
     class _Worker(StoppableThread):
@@ -212,7 +212,7 @@ class MultiProcessMapDataZMQ(_ParallelMapData, _MultiProcessZMQDataFlow):
         mixed with datapoints from the next pass.

         You can use **strict mode**, where `MultiProcessMapData.get_data()`
-        is guranteed to produce the exact set which `df.get_data()`
+        is guaranteed to produce the exact set which `df.get_data()`
         produces. Although the order of data still isn't preserved.
     """
     class _Worker(mp.Process):
```
tensorpack/dataflow/raw.py

```diff
@@ -119,7 +119,7 @@ class DataFromGenerator(DataFlow):
 class DataFromIterable(DataFlow):
-    """ Wrap an iterable of datapoitns to a DataFlow"""
+    """ Wrap an iterable of datapoints to a DataFlow"""
     def __init__(self, iterable):
         """
         Args:
```
tensorpack/graph_builder/distributed.py

```diff
@@ -142,7 +142,7 @@ class DistributedReplicatedBuilder(DataParallelBuilder, DistributedBuilderBase):
     It is an equivalent of ``--variable_update=distributed_replicated`` in
     `tensorflow/benchmarks <https://github.com/tensorflow/benchmarks>`_.

-    Note that the performance of this trianer is still not satisfactory.
+    Note that the performance of this trainer is still not satisfactory.
     Check `ResNet-Horovod <https://github.com/tensorpack/benchmarks/tree/master/ResNet-Horovod>`_
     for fast and correct distributed examples.
```
tensorpack/graph_builder/training.py

```diff
@@ -111,7 +111,7 @@ class SyncMultiGPUParameterServerBuilder(DataParallelBuilder):
     """
     Data-parallel training in 'ParameterServer' mode.
     It builds one tower on each GPU with
-    shared variable scope. It synchronoizes the gradients computed
+    shared variable scope. It synchronizes the gradients computed
     from each tower, averages them and applies to the shared variables.

     It is an equivalent of ``--variable_update=parameter_server`` in
@@ -178,7 +178,7 @@ class SyncMultiGPUReplicatedBuilder(DataParallelBuilder):
     Attribute:
         grads: #GPU number of lists of (g, v). Synchronized gradients on each device, available after build()
-            Though on different deviecs, they should contain the same value.
+            Though on different devices, they should contain the same value.
     """
     def __init__(self, towers, average, mode):
```
tensorpack/input_source/input_source.py

```diff
@@ -351,7 +351,7 @@ class DummyConstantInput(TensorInput):
     def __init__(self, shapes):
         """
         Args:
-            shapes (list[list]): a list of fully-sepcified shapes.
+            shapes (list[list]): a list of fully-specified shapes.
         """
         self.shapes = shapes
         logger.warn("Using dummy input for debug!")
@@ -372,7 +372,7 @@ class DummyConstantInput(TensorInput):
 class ZMQInput(TensorInput):
     """
-    Recv tensors from a ZMQ endpoint, with ops from https://github.com/tensorpack/zmq_ops.
+    Receive tensors from a ZMQ endpoint, with ops from https://github.com/tensorpack/zmq_ops.
     It works with :meth:`dataflow.remote.send_dataflow_zmq(format='zmq_op')`.
     """
     def __init__(self, end_point, hwm, bind=True):
```
tensorpack/utils/utils.py

```diff
@@ -134,7 +134,7 @@ _EXECUTE_HISTORY = set()
 def execute_only_once():
     """
-    Each called in the code to this function is guranteed to return True the
+    Each called in the code to this function is guaranteed to return True the
     first time and False afterwards.

     Returns:
```
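`execute_only_once` returns True only the first time it is reached from a given call site, and False on later calls from that same site. A sketch of one way to implement this by keying on the caller's file and line (an assumption about the mechanism, not tensorpack's exact code):

```python
import inspect

_EXECUTE_HISTORY = set()

def execute_only_once():
    """Return True on the first call from a given call site (file, line),
    and False on every subsequent call from that same site."""
    frame = inspect.currentframe().f_back  # the caller's frame
    ident = (frame.f_code.co_filename, frame.f_lineno)
    if ident in _EXECUTE_HISTORY:
        return False
    _EXECUTE_HISTORY.add(ident)
    return True
```

A typical use is emitting a deprecation warning once per call site rather than once per process.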