Project: Shashank Suhas / seminar-breakout

Commit 4d7f0018
Authored May 06, 2018 by Yuxin Wu
Parent: 712fd299

    update docs

Showing 7 changed files with 15 additions and 11 deletions (+15, -11)
docs/conf.py                      +1  -0
docs/tutorial/trainer.md          +2  -2
examples/DoReFa-Net/README.md     +1  -1
tensorpack/__init__.py            +2  -5
tensorpack/train/__init__.py      +1  -0
tensorpack/utils/concurrency.py   +3  -1
tensorpack/utils/gpu.py           +5  -2
docs/conf.py:

@@ -375,6 +375,7 @@ def autodoc_skip_member(app, what, name, obj, skip, options):
         'DumpParamAsImage',
         'StagingInputWrapper',
         'PeriodicRunHooks',
+        'get_nr_gpu',
         # deprecated or renamed symbolic code
         'Deconv2D', 'LeakyReLU',
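For context, the hunk header shows these names live inside conf.py's autodoc_skip_member hook, which hides deprecated symbols from the generated Sphinx API docs. A minimal sketch of that mechanism (the _SKIP_NAMES set below is abbreviated and illustrative, not conf.py's actual list):

    # Sphinx calls this for every documented member; returning True skips it.
    _SKIP_NAMES = {
        'PeriodicRunHooks',
        'get_nr_gpu',   # deprecated by this commit in favor of get_num_gpu
    }

    def autodoc_skip_member(app, what, name, obj, skip, options):
        if name in _SKIP_NAMES:
            return True
        return skip

    def setup(app):
        app.connect('autodoc-skip-member', autodoc_skip_member)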
docs/tutorial/trainer.md:

@@ -28,8 +28,8 @@ The tower function needs to follow some conventions:
 1. __It might get called multiple times__ for data-parallel training or inference.
 2. It has to respect variable collections:
-   * Only put variables __trainable by gradient descent__ into `TRAINABLE_VARIABLES`.
-   * Put variables that need to be saved into `MODEL_VARIABLES`.
+   * (Required) Only put variables __trainable by gradient descent__ into `TRAINABLE_VARIABLES`.
+   * (Recommended) Put non-trainable variables that need to be used in inference into `MODEL_VARIABLES`.
 3. It has to respect variable scopes:
    * The name of any trainable variables created in the function must be like "variable_scope_name/custom/name".
      Don't depend on name_scope's name. Don't use variable_scope's name twice.
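The rewritten bullets separate the mandatory rule from the advisable one. In TensorFlow 1.x terms, a minimal sketch (scope, variable names, and shapes are illustrative):

    import tensorflow as tf

    with tf.variable_scope('tower0'):
        # Trained by gradient descent: lands in TRAINABLE_VARIABLES by default.
        w = tf.get_variable('W', shape=[128, 10])

        # Updated by assignment (e.g. a batch-norm moving average), so not
        # trainable, but needed at inference time: add it to MODEL_VARIABLES.
        moving_mean = tf.get_variable(
            'moving_mean', shape=[10], trainable=False,
            collections=[tf.GraphKeys.GLOBAL_VARIABLES,
                         tf.GraphKeys.MODEL_VARIABLES])

    assert w in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)
    assert moving_mean in tf.get_collection(tf.GraphKeys.MODEL_VARIABLES)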
examples/DoReFa-Net/README.md:

@@ -7,7 +7,7 @@ It also contains an implementation of the following papers:
 + [Trained Ternary Quantization](https://arxiv.org/abs/1612.01064), with (W,A,G)=(t,32,32).
 + [Binarized Neural Networks](https://arxiv.org/abs/1602.02830), with (W,A,G)=(1,1,32).

-These different quantization techniques achieves the following accuracy in this implementation:
+These quantization techniques achieves the following ImageNet performance in this implementation:

 | Model | W,A,G | Top 1 Error |
 |:-------------------|-------------|------------:|
tensorpack/__init__.py:

 # -*- coding: utf-8 -*-
 # File: __init__.py
+# flake8: noqa

 import os as _os

@@ -21,11 +22,7 @@ if STATICA_HACK:
 from tensorpack.callbacks import *
 from tensorpack.tfutils import *

-if _os.environ.get('TENSORPACK_TRAIN_API', 'v2') == 'v2':
-    from tensorpack.train import *
-else:
-    from tensorpack.trainv1 import *
+# Default to v2
+from tensorpack.train import *

 from tensorpack.graph_builder import InputDesc, ModelDesc, ModelDescBase
 from tensorpack.input_source import *
 from tensorpack.predict import *
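Net effect: the TENSORPACK_TRAIN_API switch is removed and the v2 trainer interface is always star-imported (the new `# flake8: noqa` silences lint complaints about those star imports). Assuming the usual v2 exports such as SimpleTrainer, user code now behaves like this sketch:

    import os
    # Setting this variable no longer selects the legacy trainv1 API.
    os.environ['TENSORPACK_TRAIN_API'] = 'v1'

    from tensorpack import SimpleTrainer   # always resolved from tensorpack.train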
tensorpack/train/__init__.py:

 # -*- coding: utf-8 -*-
 # File: __init__.py
+# flake8: noqa

 # https://github.com/celery/kombu/blob/7d13f9b95d0b50c94393b962e6def928511bfda6/kombu/__init__.py#L34-L36
 STATICA_HACK = True
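The kombu link documents the STATICA_HACK idiom this module uses: static analyzers see explicit imports that never actually run. A condensed sketch of the pattern (the imported name below is illustrative):

    STATICA_HACK = True
    # 'kcah_acitats'[::-1].upper() == 'STATICA_HACK', so this flips the flag
    # to False at runtime without static analyzers noticing.
    globals()['kcah_acitats'[::-1].upper()] = False
    if STATICA_HACK:
        # Never executed, but IDEs and linters index these names.
        from tensorpack.train.interface import launch_train_with_config  # noqa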
tensorpack/utils/concurrency.py:

@@ -178,10 +178,12 @@ def enable_death_signal():
     in case the parent dies accidentally.
     """
     try:
-        import prctl
+        import prctl    # pip install prctl-python
     except ImportError:
         return
     else:
+        assert hasattr(prctl, 'set_pdeathsig'), \
+            "prctl.set_pdeathsig does not exist! Note that you need to install 'prctl-python' instead of 'prctl'."
         # is SIGHUP a good choice?
         prctl.set_pdeathsig(signal.SIGHUP)
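For context, set_pdeathsig wraps Linux's PR_SET_PDEATHSIG: the kernel delivers the chosen signal to this process when its parent dies, so orphaned workers clean themselves up. A minimal usage sketch (the worker body is illustrative):

    import signal
    import multiprocessing as mp

    def worker():
        # Same idea as enable_death_signal(): if the parent dies, Linux
        # sends this process SIGHUP instead of leaving it orphaned.
        try:
            import prctl   # the prctl-python package mentioned in the diff
            prctl.set_pdeathsig(signal.SIGHUP)
        except ImportError:
            pass           # skip silently where prctl is unavailable
        # ... do the actual work here ...

    if __name__ == '__main__':
        p = mp.Process(target=worker)
        p.start()
        p.join()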
tensorpack/utils/gpu.py:

@@ -8,7 +8,7 @@ from . import logger
 from .nvml import NVMLContext
 from .concurrency import subproc_call

-__all__ = ['change_gpu', 'get_nr_gpu']
+__all__ = ['change_gpu', 'get_nr_gpu', 'get_num_gpu']

 def change_gpu(val):

@@ -22,7 +22,7 @@ def change_gpu(val):
     return change_env('CUDA_VISIBLE_DEVICES', val)

-def get_nr_gpu():
+def get_num_gpu():
     """
     Returns:
         int: #available GPUs in CUDA_VISIBLE_DEVICES, or in the system.

@@ -47,3 +47,6 @@ def get_nr_gpu():
     from tensorflow.python.client import device_lib
     local_device_protos = device_lib.list_local_devices()
     return len([x.name for x in local_device_protos if x.device_type == 'GPU'])
+
+
+get_nr_gpu = get_num_gpu
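The rename keeps backward compatibility with a bare alias, so existing get_nr_gpu() call sites keep working silently. A variant that also nudges callers to migrate might look like this (a sketch, not what this commit does):

    import warnings

    def get_nr_gpu():
        """Deprecated spelling of get_num_gpu(), kept for old call sites."""
        # Assumes get_num_gpu() as defined in the diff above.
        warnings.warn("get_nr_gpu() is renamed to get_num_gpu()",
                      DeprecationWarning, stacklevel=2)
        return get_num_gpu()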