Shashank Suhas / seminar-breakout / Commits / 89c1820d

Commit 89c1820d, authored Sep 29, 2019 by Yuxin Wu
Parent: b2d5877d

    update docs

Showing 7 changed files with 20 additions and 16 deletions (+20 -16)
.lgtm.yml                                  +2 -1
docs/conf.py                               +5 -2
docs/tutorial/dataflow.md                  +1 -1
docs/tutorial/philosophy/dataflow.md       +3 -4
examples/FasterRCNN/modeling/model_fpn.py  +2 -2
examples/FasterRCNN/train.py               +3 -2
tensorpack/callbacks/inference_runner.py   +4 -4
.lgtm.yml

```diff
@@ -13,4 +13,5 @@ extraction:
     prepare:
       packages:
         - libcap-dev
+    python_setup:
+      version: 3
```
docs/conf.py

```diff
@@ -83,7 +83,10 @@ if ON_RTD:
 else:
     # skip this when building locally
     intersphinx_timeout = 0.1
-intersphinx_mapping = {'python': ('https://docs.python.org/3.6', None)}
+intersphinx_mapping = {
+    'python': ('https://docs.python.org/3.6', None),
+    'numpy': ('https://docs.scipy.org/doc/numpy/', None),
+}

 # -------------------------
 # Add any paths that contain templates here, relative to this directory.

@@ -106,7 +109,7 @@ master_doc = 'index'

 # General information about the project.
 project = u'tensorpack'
-copyright = u'2015 - 2018, Yuxin Wu, et al.'
+copyright = u'2015 - 2019, Yuxin Wu, et al.'
 author = u'Yuxin Wu, et al.'

 # The version info for the project you're documenting, acts as replacement for
```
docs/tutorial/dataflow.md

```diff
@@ -55,7 +55,7 @@ Mappers execute a mapping function in parallel on top of an existing dataflow.

 You can find details in the [API docs](../modules/dataflow.html) under the
 "parallel" and "parallel_map" section.
-[Parallel DataFlow tutorial](parallel-dataflow.html) give a deeper dive
+[Parallel DataFlow tutorial](parallel-dataflow.html) gives a deeper dive
 on how to use them to optimize your data pipeline.

 ### Run the DataFlow
```
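The tutorial text in the hunk above is about mappers that run a function in parallel over an existing dataflow. As a framework-free illustration of that idea (not tensorpack's actual API — the function name below is hypothetical, and tensorpack's `MultiThreadMapData`/`MultiProcessMapData` have richer semantics), a minimal parallel mapper can be sketched with a stdlib thread pool:

```python
# Illustrative sketch only: a minimal "parallel mapper" over an iterable
# data source. Tensorpack's real parallel mappers handle buffering,
# ordering strategies, and process-based parallelism.
from multiprocessing.dummy import Pool  # thread-based pool from the stdlib

def parallel_map_dataflow(dataflow, map_func, num_threads=4):
    """Lazily apply map_func to each datapoint using a thread pool."""
    with Pool(num_threads) as pool:
        # imap yields results in input order as they become ready
        for result in pool.imap(map_func, dataflow):
            yield result

# Usage: square each datapoint of a toy "dataflow" (any iterable works).
squared = list(parallel_map_dataflow(range(5), lambda x: x * x, num_threads=2))
```

The thread pool here stands in for the worker pool a real parallel mapper manages; process-based variants avoid the GIL at the cost of serializing datapoints.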
docs/tutorial/philosophy/dataflow.md

```diff
@@ -5,8 +5,6 @@ There are many other data loading solutions for deep learning.
 Here we explain why you may want to use Tensorpack DataFlow for your own good:
 **it's easy, and fast (enough)**.
-Note that this article may contain subjective opinions and we're happy to hear different voices.
-

 ### How Fast Do You Actually Need?
 Your data pipeline **only needs to be fast enough**.

@@ -42,7 +40,8 @@ And for us, we may optimize DataFlow even more, but we just haven't found the re
 Certain libraries advocate for a new binary data format (e.g., TFRecords, RecordIO).
 Do you need to use them?
-We think you usually do not, at least not after you try DataFlow, because they are:
+We think you usually do not, at least not after you try DataFlow, because these
+formats are:
 1. **Not Easy**: To use the new binary format,
    you need to write a script, to process your data from its original format,

@@ -98,7 +97,7 @@ Some frameworks have also provided good framework-specific solutions for data lo
 On the contrary, DataFlow is framework-agnostic: you can use it in any Python environment.
 In addition to this benefit, there are other reasons you might prefer DataFlow over the alternatives:

-#### tf.data or other TF operations
+#### tf.data and other graph operations
 The huge disadvantage of loading data in a computation graph is obvious:
 __it's extremely inflexible__.
```
examples/FasterRCNN/modeling/model_fpn.py

```diff
@@ -128,7 +128,7 @@ def multilevel_roi_align(features, rcnn_boxes, resolution):
     # Unshuffle to the original order, to match the original samples
     level_id_perm = tf.concat(level_ids, axis=0)  # A permutation of 1~N
     level_id_invert_perm = tf.invert_permutation(level_id_perm)
-    all_rois = tf.gather(all_rois, level_id_invert_perm)
+    all_rois = tf.gather(all_rois, level_id_invert_perm, name="output")
     return all_rois
```
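The change above only names the gather output, but the surrounding logic is worth unpacking: ROIs are regrouped by FPN level, then restored to their original order via an inverse permutation. A pure-Python sketch of that trick (hypothetical stand-ins for `tf.invert_permutation` and `tf.gather`, for illustration only):

```python
# Sketch of the "unshuffle via inverse permutation" pattern used above.
def invert_permutation(perm):
    """inv[perm[i]] = i, i.e. the permutation that undoes perm."""
    inv = [0] * len(perm)
    for i, p in enumerate(perm):
        inv[p] = i
    return inv

def gather(values, indices):
    return [values[i] for i in indices]

# ROIs were regrouped by FPN level; level_id_perm records the original
# index of each row in the regrouped (shuffled) order.
original = ['roi0', 'roi1', 'roi2', 'roi3']
level_id_perm = [2, 0, 3, 1]
shuffled = gather(original, level_id_perm)
# Gathering with the inverse permutation restores the original order.
restored = gather(shuffled, invert_permutation(level_id_perm))
```

Here `restored` equals `original`, which is exactly why gathering `all_rois` with `level_id_invert_perm` matches the outputs back to the original samples.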
```diff
@@ -202,7 +202,7 @@ def generate_fpn_proposals(
         # Detectron picks top-k within the batch, rather than within an image. However we do not have a batch.
         proposal_topk = tf.minimum(tf.size(proposal_scores), fpn_nms_topk)
         proposal_scores, topk_indices = tf.nn.top_k(proposal_scores, k=proposal_topk, sorted=False)
-        proposal_boxes = tf.gather(proposal_boxes, topk_indices)
+        proposal_boxes = tf.gather(proposal_boxes, topk_indices, name="all_proposals")
     else:
         for lvl in range(num_lvl):
             with tf.name_scope('Lvl{}'.format(lvl + 2)):
```
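The hunk above keeps the top-k proposals by score, clamping k so it never exceeds the number of proposals available (`tf.minimum(tf.size(...), fpn_nms_topk)`). A pure-Python sketch of that selection (hypothetical helper name; the real code uses `tf.nn.top_k` with `sorted=False`, so its output order is not guaranteed — this sketch sorts for determinism):

```python
# Sketch of the top-k proposal selection above.
def topk_proposals(scores, boxes, k):
    # Clamp k to the number of proposals, as tf.minimum(tf.size(scores), k)
    k = min(len(scores), k)
    # Indices of the k highest-scoring proposals
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    # Gather scores and boxes at those indices, as tf.gather does
    return [scores[i] for i in order], [boxes[i] for i in order]

top_scores, top_boxes = topk_proposals([0.9, 0.1, 0.5], ['a', 'b', 'c'], k=2)
```

The clamp matters because the number of proposals surviving earlier filtering can be smaller than `fpn_nms_topk`; asking `top_k` for more elements than exist would be an error.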
examples/FasterRCNN/train.py

```diff
@@ -31,8 +31,9 @@ if __name__ == '__main__':
     import multiprocessing as mp
     mp.set_start_method('spawn')

     parser = argparse.ArgumentParser()
-    parser.add_argument('--load', help='load a model to start training from. Can overwrite BACKBONE.WEIGHTS')
-    parser.add_argument('--logdir', help='log directory', default='train_log/maskrcnn')
+    parser.add_argument('--load', help='Load a model to start training from. It overwrites BACKBONE.WEIGHTS')
+    parser.add_argument('--logdir', help='Log directory. Will remove the old one if already exists.',
+                        default='train_log/maskrcnn')
     parser.add_argument('--config', help="A list of KEY=VALUE to overwrite those defined in config.py", nargs='+')
     if get_tf_version_tuple() < (1, 6):
```
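The `--config` flag above collects a list of KEY=VALUE pairs via `nargs='+'`. A standalone sketch of how such a flag parses (the dict conversion is an illustration only; applying the overrides to the actual config object is handled by the project's config.py machinery, and the keys shown are made up for the example):

```python
import argparse

# Sketch of the '--config KEY=VALUE [KEY=VALUE ...]' pattern above.
parser = argparse.ArgumentParser()
parser.add_argument('--config', nargs='+', default=[],
                    help='A list of KEY=VALUE to overwrite config defaults')

# nargs='+' gathers all following tokens into a list of strings.
args = parser.parse_args(['--config', 'TRAIN.LR=0.01', 'BACKBONE.WEIGHTS=/path/to/ckpt'])

# Split each pair on the first '=' only, so values may contain '='.
overrides = dict(kv.split('=', 1) for kv in args.config)
```

Splitting on the first `=` (via `split('=', 1)`) is the detail that lets values like file paths or expressions containing `=` survive intact.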
tensorpack/callbacks/inference_runner.py

```diff
@@ -112,8 +112,8 @@ class InferenceRunner(InferenceRunnerBase):
             input (InputSource or DataFlow): The :class:`InputSource` to run
                 inference on. If given a DataFlow, will use :class:`FeedInput`.
             infs (list): a list of :class:`Inferencer` instances.
             tower_name (str): the name scope of the tower to build.
-                Need to set a different one if multiple InferenceRunner are used.
+                If multiple InferenceRunner are used, each needs a different tower_name.
             tower_func (tfutils.TowerFunc or None): the tower function to be used to build the graph.
                 By defaults to call `trainer.tower_func` under a `training=False` TowerContext,
                 but you can change it to a different tower function

@@ -194,8 +194,8 @@ class DataParallelInferenceRunner(InferenceRunnerBase):
         Args:
             input (DataFlow or QueueInput)
             gpus (int or list[int]): #gpus, or list of GPU id
             tower_name (str): the name scope of the tower to build.
-                Need to set a different one if multiple InferenceRunner are used.
+                If multiple InferenceRunner are used, each needs a different tower_name.
             tower_func (tfutils.TowerFunc or None): the tower function to be used to build the graph.
                 The tower function will be called under a `training=False` TowerContext.
                 The default is `trainer.tower_func`,
```