Commit 20338134 (Shashank Suhas / seminar-breakout), authored Mar 28, 2017 by Yuxin Wu

    tutorials

parent da8899e7

Showing 6 changed files with 53 additions and 23 deletions (+53 / -23)
docs/requirements.txt       +3 / -1
docs/tutorial/callback.md   +4 / -4
docs/tutorial/dataflow.md   +8 / -7
docs/tutorial/glance.md     +0 / -7
docs/tutorial/index.rst     +25 / -3
docs/tutorial/model.md      +13 / -1
docs/requirements.txt
@@ -3,5 +3,7 @@ numpy
 tqdm
 decorator
 tensorflow
-Sphinx==1.5.1
+Sphinx>=1.5.1
 recommonmark==0.4.0
+sphinx_rtd_theme
+mock
docs/tutorial/callback.md
-# Callback
+# Callbacks
 Apart from the actual training iterations that minimizes the cost,
 you almost surely would like to do something else during training.
@@ -40,12 +40,12 @@ TrainConfig(
         'val-error-top1')
     ],
     extra_callbacks=[    # these callbacks are enabled by default already
-        # maintain and summarize moving average of some tensors (e.g. training loss, training error)
+        # maintain and summarize moving average of some tensors defined in the model (e.g. training loss, training error)
         MovingAverageSummary(),
         # draw a nice progress bar
         ProgressBar(),
         # run `tf.summary.merge_all` and save results every epoch
         MergeAllSummaries(),
     ]
 )
 ```
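The hunk above only tweaks the default callbacks wired into `TrainConfig`, but the same interface is what user-defined callbacks plug into. As a rough, hypothetical sketch (not part of this commit; the `Callback` hook names and the `epoch_num` attribute are as recalled for tensorpack of this era and should be checked against the installed version):

```python
import time

from tensorpack import Callback


class EpochTimer(Callback):
    """Hypothetical callback: report elapsed wall time once per epoch."""

    def _before_train(self):
        # called once, right before the training loop starts
        self._start = time.time()

    def _trigger_epoch(self):
        # called between epochs, i.e. outside the main training iterations
        elapsed = time.time() - self._start
        print('epoch {} done, {:.0f}s since training started'.format(
            self.epoch_num, elapsed))
```

Such a class would simply be appended to the `callbacks=[...]` list shown in the diff.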
docs/tutorial/dataflow.md
@@ -38,11 +38,16 @@ without worrying about adding operators to TensorFlow.
 In the mean time, thanks to the prefetching, it can still run fast enough for
 tasks as large as ImageNet training.
 
+Unless you're working with standard data types (image folders, LMDB, etc),
+you would usually want to write your own DataFlow.
+See [another tutorial](http://tensorpack.readthedocs.io/en/latest/tutorial/extend/dataflow.html)
+for details.
+
 <!--
  - TODO mention RL, distributed data, and zmq operator in the future.
 -->
 
-### Reuse in other frameworks
+### Use DataFlow outside Tensorpack
 Another good thing about DataFlow is that it is independent of
 tensorpack internals. You can just use it as an efficient data processing pipeline,
 and plug it into other frameworks.
@@ -50,14 +55,10 @@ and plug it into other frameworks.
 To use a DataFlow independently, you'll need to call `reset_state()` first to initialize it,
 and then use the generator however you want:
 ```python
-df = get_some_df()
+df = SomeDataFlow()
 df.reset_state()
 generator = df.get_data()
 for dp in generator:
     # dp is now a list. do whatever
 ```
-
-Unless you're working with standard data types (image folders, LMDB, etc),
-you would usually want to write your own DataFlow.
-See [another tutorial](http://tensorpack.readthedocs.io/en/latest/tutorial/extend/dataflow.html)
-for details.
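The paragraph this hunk moves up tells readers to write their own DataFlow for non-standard data. As a loose illustration of what that usually looks like (a sketch, not from this commit; it assumes the DataFlow interface of this era, i.e. `get_data()` yielding lists, `size()`, `reset_state()`, plus the `BatchData` wrapper, and uses a made-up random data source):

```python
import numpy as np
from tensorpack.dataflow import DataFlow, BatchData


class RandomImageFlow(DataFlow):
    """Hypothetical DataFlow yielding (image, label) datapoints."""

    def __init__(self, num):
        self._num = num

    def size(self):
        return self._num

    def get_data(self):
        for _ in range(self._num):
            img = np.random.rand(28, 28).astype('float32')
            label = np.random.randint(10)
            yield [img, label]   # a datapoint is a list of components


df = BatchData(RandomImageFlow(1000), 32)   # chain with other DataFlows
df.reset_state()
for dp in df.get_data():
    pass   # dp[0]: (32, 28, 28) image batch, dp[1]: (32,) labels
```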
docs/tutorial/glance.md deleted (100644 → 0)
-## A High-Level Glance
-
-TBD.
-
-You're recommended to glance at the [examples](../examples) to get a feeling about the
-structure.
docs/tutorial/index.rst
@@ -2,9 +2,31 @@
 Tutorials
 ---------------------
 
-To be completed.
-
-user tutorials
+A High Level Glance
+====================
+
+* :doc:`dataflow` is a set of extensible tools to help you define your input data with ease and speed.
+  It provides a uniformed interface so data processing modules can be chained together.
+  It allows you to load and process your data in pure Python and accelerate it by multiprocess prefetch.
+  See also :doc:`tf-queue` and :doc:`efficient-dataflow` for more details about efficiency of data processing.
+
+* You can use any TF-based symbolic function library to define a model in tensorpack.
+  :doc:`model` introduces where and how you define the model for tensorpack trainers to use,
+  and how you can benefit from the symbolic function library in tensorpack.
+
+  Both DataFlow and models can be used outside tensorpack, as just a data processing library and a
+  symbolic function library. Tensorpack trainers integrate these two components and add more convenient features.
+
+* tensorpack :doc:`trainer` manages the training loops for you so you won't have to worry about
+  details such as multi-GPU training. At the same time it keeps the power of customization to you
+  through callbacks.
+
+* :doc:`callback` are like ``tf.train.SessionRunHook`` plugins, or extensions. During training,
+  everything you want to do other than the main iterations can be defined through callbacks.
+
+User Tutorials
 ========================
 
 .. toctree::
@@ -17,7 +39,7 @@ user tutorials
   trainer
   callback
 
-extend tensorpack
+Extend Tensorpack
 =================
 
 .. toctree::
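The overview added here says a tensorpack trainer ties a DataFlow and a model together and runs the training loop, with callbacks covering everything else. A rough sketch of how those pieces met in the tensorpack API of this period (names such as `TrainConfig`, `SimpleTrainer`, and `ModelSaver` are recalled from that era and should be verified; `MyModel` and `my_dataflow` are hypothetical placeholders):

```python
from tensorpack import TrainConfig, SimpleTrainer, ModelSaver

# MyModel: a ModelDesc subclass; my_dataflow: any DataFlow (both hypothetical here)
config = TrainConfig(
    model=MyModel(),
    dataflow=my_dataflow,
    callbacks=[ModelSaver()],   # user callbacks; the defaults come via extra_callbacks
    max_epoch=100,
)
SimpleTrainer(config).train()   # the trainer owns the loop; callbacks hook into it
```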
docs/tutorial/model.md
@@ -28,7 +28,7 @@ such as conv/deconv, fc, batch normalization, pooling layers, and some custom lo
 Using the tensorpack implementations, you can also benefit from `argscope` and `LinearWrap` to
 simplify the code.
 
-## argscope and LinearWrap
+### argscope and LinearWrap
 `argscope` gives you a context with default arguments.
 `LinearWrap` allows you to simplify "linear structure" models by
 adding the layers one by one.
@@ -59,3 +59,15 @@ l = tf.multiply(l, 0.5)
 l = func(l, *args, **kwargs)
 l = FullyConnected('fc1', l, 10, nl=tf.identity)
 ```
+
+### Use Models outside Tensorpack
+You can use the tensorpack models alone as a simple symbolic function library, and write your own
+training code instead of using tensorpack trainers.
+To do this, just enter a [TowerContext](http://tensorpack.readthedocs.io/en/latest/modules/tfutils.html#tensorpack.tfutils.TowerContext)
+when you define your model:
+
+```python
+with TowerContext('', is_training=True):
+    # call any tensorpack symbolic functions
+```
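Taken together, the sections touched above describe the standalone-usage pattern for tensorpack layers. A sketch combining them (not part of this commit; it assumes the layer argument names of this tensorpack era, `out_channel` / `kernel_shape` / `nl`, and uses a hypothetical placeholder input):

```python
import tensorflow as tf
from tensorpack import argscope, LinearWrap, Conv2D, MaxPooling, FullyConnected
from tensorpack.tfutils import TowerContext

# hypothetical NHWC input tensor; in a real model it would come from the input pipeline
image = tf.placeholder(tf.float32, [None, 28, 28, 1])

with TowerContext('', is_training=True):
    # argscope sets default arguments for Conv2D inside the block;
    # LinearWrap chains the layers one by one, and the trailing () returns the tensor
    with argscope(Conv2D, kernel_shape=3, nl=tf.nn.relu, out_channel=32):
        logits = (LinearWrap(image)
                  .Conv2D('conv0')
                  .MaxPooling('pool0', 2)
                  .Conv2D('conv1')
                  .FullyConnected('fc0', 512, nl=tf.nn.relu)
                  .FullyConnected('fc1', 10, nl=tf.identity)())
```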