Shashank Suhas / seminar-breakout · Commits

Commit f644da74, authored Aug 13, 2017 by Yuxin Wu
Parent: e755ace6

update docs

Showing 6 changed files with 28 additions and 28 deletions:
- docs/tutorial/callback.md (+4, -4)
- docs/tutorial/extend/callback.md (+6, -4)
- docs/tutorial/extend/dataflow.md (+1, -1)
- docs/tutorial/extend/model.md (+3, -4)
- docs/tutorial/index.rst (+1, -1)
- docs/tutorial/symbolic.md (+13, -14)
docs/tutorial/callback.md (+4, -4)

````diff
@@ -44,10 +44,10 @@ TrainConfig(
           -d type=note -d title="validation error" \\
           -d body={val-error-top1} > /dev/null 2>&1',
           'val-error-top1'),
     # record GPU utilizations during training
     GPUUtilizationTracker(),
     # can pause the training and start a debug shell, to observe what's going on
     InjectShell(shell='ipython')
   ],
   extra_callbacks=[   # these callbacks are enabled by default already
     # maintain and summarize moving average of some tensors defined in the model (e.g. training loss, training error)
````
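For orientation, a minimal sketch of the surrounding `TrainConfig` this hunk is taken from, assuming tensorpack's `TrainConfig`, `GPUUtilizationTracker`, and `InjectShell` APIs; the model and dataflow are hypothetical placeholders:

```python
from tensorpack import TrainConfig
from tensorpack.callbacks import GPUUtilizationTracker, InjectShell

config = TrainConfig(
    model=MyModel(),        # hypothetical ModelDesc subclass
    dataflow=my_dataflow,   # hypothetical DataFlow instance
    callbacks=[
        # record GPU utilizations during training
        GPUUtilizationTracker(),
        # pause the training and start a debug shell, to observe what's going on
        InjectShell(shell='ipython'),
    ],
    max_epoch=100,
)
```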
docs/tutorial/extend/callback.md (+6, -4)

````diff
@@ -46,7 +46,7 @@ Usually some finalization work.
 * `_before_epoch(self)`, `_after_epoch(self)`
 
-  Use them only when you really need something to happen __immediately__ before/after an epoch.
+  Use them __only__ when you really need something to happen __immediately__ before/after an epoch.
   Otherwise, `_trigger_epoch` should be enough.
 
 * `_before_run(self, ctx)`, `_after_run(self, ctx, values)`
@@ -78,9 +78,11 @@ Do something after each epoch has finished. Will call `self.trigger()` by defaul
 * `_trigger(self)`
 
-  By default will get called by `_trigger_epoch`,
-  but you can customize the scheduling of this callback by
-  `PeriodicTrigger`, to let this method run every k steps or every k epochs.
+  Define something to do here without knowing how often it will get called.
+  By default it will get called by `_trigger_epoch`,
+  but you can customize the scheduling of this method by
+  [`PeriodicTrigger`](http://tensorpack.readthedocs.io/en/latest/modules/callbacks.html#tensorpack.callbacks.PeriodicTrigger),
+  to let this method run every k steps or every k epochs.
 
 ### What you can do in the callback
````
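A minimal sketch of how `_trigger` composes with `PeriodicTrigger`, assuming the tensorpack callback API linked in the hunk above; `ValidationDump` is a hypothetical callback:

```python
from tensorpack.callbacks import Callback, PeriodicTrigger

class ValidationDump(Callback):
    """Hypothetical callback whose _trigger fires once per epoch by default."""
    def _trigger(self):
        # runs whenever the surrounding trigger fires; by default the base
        # _trigger_epoch calls this at the end of every epoch
        print("triggered at epoch {}, step {}".format(
            self.epoch_num, self.global_step))

# reschedule the same method to run every 2 epochs instead of every epoch
cb = PeriodicTrigger(ValidationDump(), every_k_epochs=2)
```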
docs/tutorial/extend/dataflow.md (+1, -1)

````diff
@@ -15,7 +15,7 @@ class MyDataFlow(DataFlow):
       yield [digit, label]
 ```
 
-Optionally, DataFlow can implement the following two methods:
+Optionally, you can implement the following two methods:
 
 + `size()`. Return the number of elements the generator can produce. Certain tensorpack features might require this.
````
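For context, a runnable sketch of the `MyDataFlow` pattern this hunk refers to, assuming the 2017-era DataFlow interface (a `get_data` generator plus the optional `size`); the random data is a placeholder:

```python
import numpy as np
from tensorpack.dataflow import DataFlow

class MyDataFlow(DataFlow):
    def get_data(self):
        # generator yielding one datapoint at a time, as a list of components
        for _ in range(self.size()):
            digit = np.random.rand(28, 28)   # placeholder image
            label = np.random.randint(10)    # placeholder label
            yield [digit, label]

    def size(self):
        # number of elements get_data() can produce
        return 100
```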
docs/tutorial/extend/model.md (+3, -4)

````diff
@@ -4,14 +4,13 @@
 The first thing to note: __you never have to write a layer__.
 Tensorpack layers are nothing but wrappers of symbolic functions.
 You can use any symbolic functions you have written or seen elsewhere with or without tensorpack layers.
-You can use symbolic functions from slim/tflearn/tensorlayer, and even Keras/sonnet ([with some tricks](../../examples/mnist-keras.py)).
 If you would like, you can make a symbolic function become a "layer" by following some simple rules, and then gain benefits from the framework.
 
 Take a look at the [Convolutional Layer](../../tensorpack/models/conv2d.py#L14) implementation for an example of how to define a layer:
 
 ```python
-@layer_register()
+@layer_register(log_shape=True)
 def Conv2D(x, out_channel, kernel_shape,
            padding='SAME', stride=1,
            W_init=None, b_init=None,
@@ -27,9 +26,9 @@ Basically, a tensorpack layer is just a symbolic function, but with the followin
 By making a symbolic function a "layer", the following things will happen:
 + You will need to call the function with a scope name as the first argument, e.g. `Conv2D('conv0', x, 32, 3)`.
-  Everything happening in this function will be under the variable scope 'conv0'.
+  Everything happening in this function will be under the variable scope `conv0`.
   You can register the layer with `use_scope=False` to disable this feature.
-+ Static shapes of input/output will be printed to screen.
++ Static shapes of input/output will be printed to screen (if you register with `log_shape=True`).
 + `argscope` will work for all its arguments except the input tensor(s).
 + It will work with `LinearWrap`: you can use it if the output of one layer matches the input of the next layer.
````
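A minimal sketch of registering a custom layer under the rules this hunk describes; `Scale` is a hypothetical layer, and the import path of `layer_register` is an assumption that may vary across tensorpack versions:

```python
import tensorflow as tf
from tensorpack.models.common import layer_register  # path may differ by version

@layer_register(log_shape=True)
def Scale(x, init=1.0):
    # hypothetical layer: multiply the input by a single trainable scalar
    gamma = tf.get_variable('gamma', shape=[],
                            initializer=tf.constant_initializer(init))
    return x * gamma

# the first positional argument becomes the variable scope,
# so 'gamma' lives under the scope 'scale0':
#   out = Scale('scale0', some_tensor)
```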
docs/tutorial/index.rst (+1, -1)

````diff
@@ -45,5 +45,5 @@ Extend Tensorpack
   extend/dataflow
   extend/augmentor
   extend/model
-  extend/trainer
   extend/callback
+  extend/trainer
````
docs/tutorial/symbolic.md (+13, -14)

````diff
@@ -12,22 +12,21 @@ In the future we may shift to `tf.layers` because they will be better maintained
 ### argscope and LinearWrap
 
 `argscope` gives you a context with default arguments.
-`LinearWrap` allows you to simplify "linear structure" models by
-adding the layers one by one.
+`LinearWrap` is a syntax sugar to simplify building "linear structure" models.
 
 The following code:
 ```python
 with argscope(Conv2D, out_channel=32, kernel_shape=3, nl=tf.nn.relu):
     l = (LinearWrap(image)  # the starting brace is only for line-breaking
          .Conv2D('conv0')
          .MaxPooling('pool0', 2)
          .Conv2D('conv1', padding='SAME')
          .Conv2D('conv2', kernel_shape=5)
          .FullyConnected('fc0', 512, nl=tf.nn.relu)
          .Dropout('dropout', 0.5)
          .tf.multiply(0.5)
          .apply(func, *args, **kwargs)
          .FullyConnected('fc1', out_dim=10, nl=tf.identity)())
 ```
 is equivalent to:
 ```
````
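The "is equivalent to" block is cut off in the hunk above. Based on `LinearWrap`'s documented chaining semantics (each call feeds the running tensor as the layer's input, `.tf.xxx(...)` calls the TF function with the tensor as first argument, and `.apply(func)` applies an arbitrary function), the expansion should read roughly as follows; `image`, `func`, `args`, and `kwargs` are the same placeholders as in the snippet:

```python
with argscope(Conv2D, out_channel=32, kernel_shape=3, nl=tf.nn.relu):
    l = Conv2D('conv0', image)
    l = MaxPooling('pool0', l, 2)
    l = Conv2D('conv1', l, padding='SAME')
    l = Conv2D('conv2', l, kernel_shape=5)
    l = FullyConnected('fc0', l, 512, nl=tf.nn.relu)
    l = Dropout('dropout', l, 0.5)
    l = tf.multiply(l, 0.5)
    l = func(l, *args, **kwargs)
    l = FullyConnected('fc1', l, out_dim=10, nl=tf.identity)
```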
````diff
@@ -49,14 +48,14 @@ To do this, just enter a [TowerContext](http://tensorpack.readthedocs.io/en/late
 when you define your model:
 ```python
 with TowerContext('', is_training=True):
     # call any tensorpack layer
 ```
 
 Some layers (in particular ``BatchNorm``) has different train/test time behavior which is controlled
 by ``TowerContext``. If you need to use the tensorpack version of them in test time, you'll need to create the ops for them under another context.
 ```python
 with tf.variable_scope(tf.get_variable_scope(), reuse=True), TowerContext('predict', is_training=False):
     # build the graph again
 ```
 
 ### Use Other Symbolic Libraries within Tensorpack
````
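Putting the two snippets from this hunk together, a sketch of building the same tower once for training and again for inference with shared variables; `build_model` and the input tensors are hypothetical placeholders, and the `TowerContext` import path assumes the 2017-era `tensorpack.tfutils.tower` module:

```python
import tensorflow as tf
from tensorpack.tfutils.tower import TowerContext  # import path is version-dependent

# training graph: BatchNorm etc. use their training-time behavior
with TowerContext('', is_training=True):
    train_logits = build_model(train_images)   # hypothetical model-building function

# inference graph: reuse the trained variables, switch to test-time behavior
with tf.variable_scope(tf.get_variable_scope(), reuse=True), \
        TowerContext('predict', is_training=False):
    test_logits = build_model(test_images)
```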