Project: seminar-breakout (Shashank Suhas) / Commits / 83695c0b

Commit 83695c0b, authored Jun 01, 2018 by Yuxin Wu
Parent: 4158eb7e

    clean-up many old deprecations

Showing 15 changed files with 67 additions and 150 deletions
docs/conf.py                                 +7  -9
examples/DisturbLabel/README.md              +5  -3
examples/DisturbLabel/disturb.py             +2  -1
examples/DisturbLabel/mnist-disturb.py       +4  -4
examples/DisturbLabel/svhn-disturb.py        +16 -14
examples/DoReFa-Net/svhn-digit-dorefa.py     +2  -3
examples/ImageNetModels/inception-bn.py      +3  -1
tensorpack/dataflow/imgaug/deform.py         +5  -1
tensorpack/models/fc.py                      +12 -2
tensorpack/models/nonlin.py                  +1  -17
tensorpack/models/softmax.py                 +0  -36
tensorpack/tfutils/sessinit.py               +1  -1
tensorpack/tfutils/symbolic_functions.py     +8  -52
tensorpack/tfutils/varmanip.py               +1  -1
tensorpack/train/tower.py                    +0  -5
docs/conf.py  +7 -9

 # -*- coding: utf-8 -*-
 #
 # flake8: noqa
 # tensorpack documentation build configuration file, created by
 # sphinx-quickstart on Sun Mar 27 01:41:24 2016.
 #
...
@@ -92,8 +92,8 @@ master_doc = 'index'
 # General information about the project.
 project = u'tensorpack'
-copyright = u'2015 - 2017, Yuxin Wu'
+copyright = u'2015 - 2018, Yuxin Wu, et al.'
-author = u'Yuxin Wu'
+author = u'Yuxin Wu, et al.'
 # The version info for the project you're documenting, acts as replacement for
 # |version| and |release|, also used in various other places throughout the
...
@@ -365,12 +365,10 @@ def autodoc_skip_member(app, what, name, obj, skip, options):
     # Hide some names that are deprecated or not intended to be used
     if name in [
         # deprecated stuff:
-        'GaussianDeform',
-        'set_tower_func',
         'TryResumeTraining',
+        'QueueInputTrainer',
         # renamed stuff:
-        'dump_chkpt_vars',
         'DumpTensor',
         'DumpParamAsImage',
         'StagingInputWrapper',
...
@@ -378,9 +376,9 @@ def autodoc_skip_member(app, what, name, obj, skip, options):
         'get_nr_gpu',
         # deprecated or renamed symbolic code
-        'Deconv2D', 'LeakyReLU',
+        'Deconv2D', 'saliency_map',
         'get_scalar_var', 'psnr',
-        'prediction_incorrect', 'huber_loss', 'SoftMax',
+        'prediction_incorrect', 'huber_loss',
         # internal only
         'apply_default_prefetch',
...
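For context, this list feeds a standard Sphinx autodoc-skip-member handler; a minimal sketch of the wiring, using one of the names the commit adds to the list:

def autodoc_skip_member(app, what, name, obj, skip, options):
    # returning True hides the member from the generated API docs
    if name in ['QueueInputTrainer']:
        return True
    return skip

def setup(app):
    app.connect('autodoc-skip-member', autodoc_skip_member)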
examples/DisturbLabel/README.md  +5 -3

@@ -3,7 +3,8 @@
 I ran into the paper [DisturbLabel: Regularizing CNN on the Loss Layer](https://arxiv.org/abs/1605.00055) on CVPR16,
 which basically said that noisy data gives you better performance.
-As many, I didn't believe the method and the results.
+As many, I didn't believe the method and the results. This code exists to
+disprove the results in the paper.

 This is a simple mnist training script with DisturbLabel. It uses the simple architecture in the paper, and
 hyperparameters in my original [mnist example](../mnist-convnet.py).
...
@@ -21,6 +22,7 @@ The method didn't work for slightly harder problems such as SVHN:

 [figure: SVHN error curves with error bars]

 The SVHN experiements used the model & hyperparemeters as my original [svhn example](../svhn-digit-convnet.py).
 Experiements were all repeated 10 times to get the error bar.
+It apparently does not work.
-And I don't believe it will work for ImageNet either. And that's a CVPR paper..
+It will not work for ImageNet either. There is indeed a terribly weak
+ImageNet experiment in this paper, and that's a CVPR paper.
examples/DisturbLabel/disturb.py  +2 -1

@@ -11,7 +11,8 @@ class DisturbLabel(ProxyDataFlow, RNGDataFlow):
         self.prob = prob

     def reset_state(self):
-        super(DisturbLabel, self).reset_state()
+        RNGDataFlow.reset_state(self)
+        ProxyDataFlow.reset_state(self)

     def get_data(self):
         for dp in self.ds.get_data():
...
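The super() call is replaced by explicit calls to both bases because DisturbLabel inherits from two dataflows with independent reset logic. A plain-Python sketch (class bodies reduced to prints) of why cooperative super() is not enough here:

# If ProxyDataFlow.reset_state() does not itself call super().reset_state(),
# a single super() call in the subclass stops after the first base in the MRO
# and RNGDataFlow.reset_state() never runs.
class RNGDataFlow:
    def reset_state(self):
        print("RNGDataFlow.reset_state: seeding self.rng")

class ProxyDataFlow:
    def reset_state(self):                 # does NOT call super()
        print("ProxyDataFlow.reset_state: resetting wrapped dataflow")

class DisturbLabel(ProxyDataFlow, RNGDataFlow):
    def reset_state(self):
        # super(DisturbLabel, self).reset_state() would stop at ProxyDataFlow.
        # Calling each base explicitly guarantees both side effects happen:
        RNGDataFlow.reset_state(self)
        ProxyDataFlow.reset_state(self)

DisturbLabel().reset_state()   # prints both lines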
examples/DisturbLabel/mnist-disturb.py  +4 -4

 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 # File: mnist-disturb.py
-# Author: Yuxin Wu

 import os
 import argparse

 from tensorpack import *
+from tensorpack.utils import logger
 from tensorpack.dataflow import dataset
 import tensorflow as tf

 from disturb import DisturbLabel

 import imp
 mnist_example = imp.load_source('mnist_example',
-                                os.path.join(os.path.dirname(__file__), '..', 'mnist-convnet.py'))
+                                os.path.join(os.path.dirname(__file__), '..', 'basics', 'mnist-convnet.py'))
 get_config = mnist_example.get_config
...
@@ -25,7 +25,6 @@ def get_data():
 mnist_example.get_data = get_data
-IMAGE_SIZE = 28

 class Model(mnist_example.Model):
...
@@ -41,7 +40,7 @@ class Model(mnist_example.Model):
                 .FullyConnected('fc1', out_dim=10, activation=tf.identity)())
         tf.nn.softmax(logits, name='prob')
-        wrong = symbolic_functions.prediction_incorrect(logits, label)
+        wrong = tf.cast(tf.logical_not(tf.nn.in_top_k(logits, label, 1)), tf.float32, name='incorrect_vector')
         add_moving_summary(tf.reduce_mean(wrong, name='train_error'))

         cost = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=label)
...
@@ -60,5 +59,6 @@ if __name__ == '__main__':
     if args.gpu:
         os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu
+    logger.auto_set_dir()
     config = get_config()
     launch_train_with_config(config, SimpleTrainer())
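The new wrong expression is the recurring replacement pattern in this commit: build the per-sample error indicator from plain TF ops instead of the deprecated symbolic_functions.prediction_incorrect. A self-contained sketch with illustrative shapes:

import tensorflow as tf

logits = tf.placeholder(tf.float32, [None, 10])   # [batch, num_classes]
label = tf.placeholder(tf.int32, [None])          # [batch]

# tf.nn.in_top_k yields True where the label is among the top-1 predictions;
# negating and casting gives 1.0 for every misclassified sample.
wrong = tf.cast(tf.logical_not(tf.nn.in_top_k(logits, label, 1)),
                tf.float32, name='incorrect_vector')
train_error = tf.reduce_mean(wrong, name='train_error')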
examples/DisturbLabel/svhn-disturb.py  +16 -14

 #!/usr/bin/env python
 # -*- coding: utf-8 -*-
 # File: svhn-disturb.py
-# Author: Yuxin Wu

 import argparse
 import os
...
@@ -9,14 +8,15 @@ import imp
 from tensorpack import *
+from tensorpack.utils import logger
 from tensorpack.dataflow import dataset

 from disturb import DisturbLabel

 svhn_example = imp.load_source('svhn_example',
-                               os.path.join(os.path.dirname(__file__), '..', 'svhn-digit-convnet.py'))
+                               os.path.join(os.path.dirname(__file__), '..', 'basics', 'svhn-digit-convnet.py'))
 Model = svhn_example.Model
-get_config = svhn_example.get_config

 def get_data():
...
@@ -41,19 +41,21 @@ def get_data():
     return data_train, data_test

-svhn_example.get_data = get_data

 if __name__ == '__main__':
     parser = argparse.ArgumentParser()
-    parser.add_argument('--gpu', help='a gpu to use')
     parser.add_argument('--prob', help='disturb prob', type=float, required=True)
     args = parser.parse_args()

-    if args.gpu:
-        os.environ['CUDA_VISIBLE_DEVICES'] = args.gpu
-    else:
-        os.environ['CUDA_VISIBLE_DEVICES'] = '0'
-    config = get_config(args.prob)
+    logger.auto_set_dir()
+    data_train, data_test = get_data()
+    config = TrainConfig(
+        model=Model(),
+        data=QueueInput(data_train),
+        callbacks=[
+            ModelSaver(),
+            InferenceRunner(data_test, ScalarStats(['cost', 'accuracy']))
+        ],
+        max_epoch=350,
+    )
     launch_train_with_config(config, SimpleTrainer())
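Since the DisturbLabel wrapper is the point of these scripts, a hedged sketch of how it plugs into a dataflow pipeline; the constructor signature (ds, prob) is inferred from disturb.py above, and the batch size is illustrative:

from tensorpack.dataflow import dataset, BatchData
from disturb import DisturbLabel   # the wrapper changed in this commit

def get_data(prob):
    train = dataset.SVHNDigit('train')
    train = DisturbLabel(train, prob)   # randomly replace labels with probability `prob`
    train = BatchData(train, 128)
    test = BatchData(dataset.SVHNDigit('test'), 128, remainder=True)
    return train, test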
examples/DoReFa-Net/svhn-digit-dorefa.py  +2 -3

...
@@ -6,7 +6,6 @@
 import argparse

 from tensorpack import *
-from tensorpack.tfutils.symbolic_functions import prediction_incorrect
 from tensorpack.tfutils.summary import add_moving_summary, add_param_summary
 from tensorpack.dataflow import dataset
 from tensorpack.tfutils.varreplace import remap_variables
...
@@ -109,7 +108,7 @@ class Model(ModelDesc):
         tf.nn.softmax(logits, name='output')

         # compute the number of failed samples
-        wrong = prediction_incorrect(logits, label)
+        wrong = tf.cast(tf.logical_not(tf.nn.in_top_k(logits, label, 1)), tf.float32, name='wrong_tensor')
         # monitor training error
         add_moving_summary(tf.reduce_mean(wrong, name='train_error'))
...
@@ -163,7 +162,7 @@ def get_config():
         callbacks=[
             ModelSaver(),
             InferenceRunner(data_test,
-                            [ScalarStats('cost'), ClassificationError()])
+                            [ScalarStats('cost'), ClassificationError('wrong_tensor')])
         ],
         model=Model(),
         max_epoch=200,
...
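The renamed tensor matters because tensorpack's ClassificationError inferencer looks a 0/1 error tensor up by name and averages it over the validation set. A hedged sketch of the pairing used above, graph side and config side:

import tensorflow as tf
from tensorpack.callbacks import ScalarStats, ClassificationError

def build_error_indicator(logits, label):
    # 1.0 for each misclassified sample; the explicit name is what the
    # callback below looks up at validation time.
    return tf.cast(tf.logical_not(tf.nn.in_top_k(logits, label, 1)),
                   tf.float32, name='wrong_tensor')

# config side, matching the diff: average 'wrong_tensor' over the test set
inferencers = [ScalarStats('cost'), ClassificationError('wrong_tensor')]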
examples/ImageNetModels/inception-bn.py  +3 -1

...
@@ -9,7 +9,6 @@ import tensorflow as tf
 from tensorpack import *
-from tensorpack.tfutils.symbolic_functions import prediction_incorrect
 from tensorpack.tfutils.summary import add_moving_summary
 from tensorpack.dataflow import dataset
 from tensorpack.utils.gpu import get_nr_gpu
...
@@ -99,6 +98,9 @@ class Model(ModelDesc):
         cost = tf.add_n([loss3, 0.3 * loss2, 0.3 * loss1], name='weighted_cost')
         add_moving_summary([cost, loss1, loss2, loss3])

+        def prediction_incorrect(logits, label, topk, name):
+            return tf.cast(tf.logical_not(tf.nn.in_top_k(logits, label, topk)), tf.float32, name=name)
+
         wrong = prediction_incorrect(logits, label, 1, name='wrong-top1')
         add_moving_summary(tf.reduce_mean(wrong, name='train_error_top1'))
...
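The helper inlined above keeps topk as a parameter, so the same one-liner extends to a top-5 error without the removed import; a minimal sketch with illustrative placeholder shapes:

import tensorflow as tf

logits = tf.placeholder(tf.float32, [None, 1000])
label = tf.placeholder(tf.int32, [None])

wrong_top5 = tf.cast(tf.logical_not(tf.nn.in_top_k(logits, label, 5)),
                     tf.float32, name='wrong-top5')
train_error_top5 = tf.reduce_mean(wrong_top5, name='train_error_top5')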
tensorpack/dataflow/imgaug/deform.py  +5 -1

...
@@ -6,7 +6,11 @@ from .base import ImageAugmentor
 from ...utils import logger
 import numpy as np

-__all__ = ['GaussianDeform']
+__all__ = []
+
+# Code was temporarily kept here for a future reference in case someone needs it
+# But it was already deprecated,
+# because this augmentation is not a general one that people will often find helpful.


 class GaussianMap(object):
...
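The class body is kept but no longer exported; a small sketch of the user-visible effect, assuming the module path stays as above:

from tensorpack.dataflow.imgaug.deform import GaussianDeform  # explicit import still works
from tensorpack.dataflow.imgaug.deform import *               # but the wildcard no longer exports it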
tensorpack/models/fc.py  +12 -2

...
@@ -3,14 +3,24 @@
 import tensorflow as tf
+import numpy as np

 from .common import layer_register, VariableHolder
 from .tflayer import convert_to_tflayer_args, rename_get_variable
-from ..tfutils import symbolic_functions as symbf

 __all__ = ['FullyConnected']


+def batch_flatten(x):
+    """
+    Flatten the tensor except the first dimension.
+    """
+    shape = x.get_shape().as_list()[1:]
+    if None not in shape:
+        return tf.reshape(x, [-1, int(np.prod(shape))])
+    return tf.reshape(x, tf.stack([tf.shape(x)[0], -1]))
+
+
 @layer_register(log_shape=True)
 @convert_to_tflayer_args(
     args_names=['units'],
...
@@ -36,7 +46,7 @@ def FullyConnected(
     * ``b``: bias
     """

-    inputs = symbf.batch_flatten(inputs)
+    inputs = batch_flatten(inputs)
     with rename_get_variable({'kernel': 'W', 'bias': 'b'}):
         layer = tf.layers.Dense(
             units=units,
...
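fc.py now carries its own batch_flatten copy instead of importing it from the deprecated symbolic_functions module. A sketch of its behavior, assuming TF 1.x static/dynamic shape semantics:

import numpy as np
import tensorflow as tf

def batch_flatten(x):
    shape = x.get_shape().as_list()[1:]
    if None not in shape:                       # all non-batch dims static:
        return tf.reshape(x, [-1, int(np.prod(shape))])   # fixed output width
    return tf.reshape(x, tf.stack([tf.shape(x)[0], -1]))  # dynamic fallback

x = tf.placeholder(tf.float32, [None, 4, 4, 8])
print(batch_flatten(x).get_shape())   # (?, 128): width stays static, so
                                      # tf.layers.Dense knows its input size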
tensorpack/models/nonlin.py  +1 -17

...
@@ -6,9 +6,8 @@ import tensorflow as tf
 from .common import layer_register, VariableHolder
 from .batch_norm import BatchNorm
-from ..utils.develop import log_deprecated

-__all__ = ['Maxout', 'PReLU', 'LeakyReLU', 'BNReLU']
+__all__ = ['Maxout', 'PReLU', 'BNReLU']


 @layer_register(use_scope=None)
...
@@ -60,21 +59,6 @@ def PReLU(x, init=0.001, name='output'):
     return ret


-@layer_register(use_scope=None)
-def LeakyReLU(x, alpha, name='output'):
-    """
-    Leaky ReLU as in paper `Rectifier Nonlinearities Improve Neural Network Acoustic Models
-    <http://ai.stanford.edu/~amaas/papers/relu_hybrid_icml2013_final.pdf>`_.
-
-    Args:
-        x (tf.Tensor): input
-        alpha (float): the slope.
-    """
-    log_deprecated("LeakyReLU", "Use tf.nn.leaky_relu in TF 1.4 instead!", "2018-03-30")
-    return tf.maximum(x, alpha * x, name=name)


 @layer_register(use_scope=None)
 def BNReLU(x, name=None):
     """
...
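Code that still used the removed layer can switch to the builtin the old deprecation message pointed at; a minimal sketch (TF >= 1.4):

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 64])
# old: y = LeakyReLU(x, alpha=0.2)
y = tf.nn.leaky_relu(x, alpha=0.2, name='output')   # same max(x, alpha * x)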
tensorpack/models/softmax.py  +0 -36 (file deleted, 100644 → 0)

-# -*- coding: utf-8 -*-
-# File: softmax.py
-
-import tensorflow as tf
-
-from .common import layer_register
-from ..utils.develop import log_deprecated
-
-__all__ = ['SoftMax']
-
-
-@layer_register(use_scope=None)
-def SoftMax(x, use_temperature=False, temperature_init=1.0):
-    """
-    A SoftMax layer (w/o linear projection) with optional temperature, as
-    defined in the paper `Distilling the Knowledge in a Neural Network
-    <https://arxiv.org/abs/1503.02531>`_.
-
-    Args:
-        x (tf.Tensor): input of any dimension. Softmax will be performed on
-            the last dimension.
-        use_temperature (bool): use a learnable temperature or not.
-        temperature_init (float): initial value of the temperature.
-
-    Returns:
-        tf.Tensor: a tensor of the same shape named ``output``.
-
-    Variable Names:
-
-    * ``invtemp``: 1.0/temperature.
-    """
-    log_deprecated("models.SoftMax", "Please implement it by yourself!", "2018-05-01")
-    if use_temperature:
-        t = tf.get_variable('invtemp', [],
-                            initializer=tf.constant_initializer(1.0 / float(temperature_init)))
-        x = x * t
-    return tf.nn.softmax(x, name='output')
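The deleted layer's deprecation message says to implement it yourself; a minimal equivalent sketch following the deleted body (TF 1.x variable API):

import tensorflow as tf

def softmax_with_temperature(x, temperature_init=1.0):
    # learnable inverse temperature, initialized to 1/temperature_init
    inv_t = tf.get_variable(
        'invtemp', [],
        initializer=tf.constant_initializer(1.0 / float(temperature_init)))
    return tf.nn.softmax(x * inv_t, name='output')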
tensorpack/tfutils/sessinit.py  +1 -1

...
@@ -262,7 +262,7 @@ def get_model_loader(filename):
     return SaverRestore(filename)


-@deprecated("Write the logic yourself or use AutoResumeTrainConfig!", "2018-06-01")
+@deprecated("It's better to write the logic yourself or use AutoResumeTrainConfig!", "2018-07-01")
 def TryResumeTraining():
     """
     Try loading latest checkpoint from ``logger.get_logger_dir()``, only if there is one.
...
tensorpack/tfutils/symbolic_functions.py  +8 -52

...
@@ -7,15 +7,15 @@ import numpy as np
 from ..utils.develop import deprecated

-# __all__ = ['get_scalar_var']
+__all__ = ['get_scalar_var', 'prediction_incorrect', 'flatten', 'batch_flatten', 'print_stat', 'rms', 'huber_loss']


 # this function exists for backwards-compatibility
 def prediction_incorrect(logits, label, topk=1, name='incorrect_vector'):
     return tf.cast(tf.logical_not(tf.nn.in_top_k(logits, label, topk)),
                    tf.float32, name=name)


+@deprecated("Please implement it yourself!", "2018-08-01")
 def flatten(x):
     """
     Flatten the tensor.
...
@@ -23,6 +23,7 @@ def flatten(x):
     return tf.reshape(x, [-1])


+@deprecated("Please implement it yourself!", "2018-08-01")
 def batch_flatten(x):
     """
     Flatten the tensor except the first dimension.
...
@@ -46,6 +47,8 @@ def print_stat(x, message=None):
                      message=message, name='print_' + x.op.name)


+# after deprecated, keep it for internal use only
+# @deprecated("Please implement it yourself!", "2018-08-01")
 def rms(x, name=None):
     """
     Returns:
...
@@ -58,7 +61,7 @@ def rms(x, name=None):
     return tf.sqrt(tf.reduce_mean(tf.square(x)), name=name)


-@deprecated("Please use tf.losses.huber_loss instead!")
+@deprecated("Please use tf.losses.huber_loss instead!", "2018-08-01")
 def huber_loss(x, delta=1, name='huber_loss'):
     r"""
     Huber loss of x.
...
@@ -88,6 +91,7 @@ def huber_loss(x, delta=1, name='huber_loss'):
 # TODO deprecate this in the future
 # doesn't hurt to keep it here for now
+@deprecated("Simply use tf.get_variable instead!", "2018-08-01")
 def get_scalar_var(name, init_value, summary=False, trainable=False):
     """
     Get a scalar float variable with certain initial value.
...
@@ -142,51 +146,3 @@ def psnr(prediction, ground_truth, maxp=None, name='psnr'):
     psnr = tf.add(tf.multiply(20., log10(maxp)), psnr, name=name)
     return psnr
-
-
-@deprecated("Please implement it by yourself.", "2018-04-28")
-def saliency_map(output, input, name="saliency_map"):
-    """
-    Produce a saliency map as described in the paper:
-    `Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
-    <https://arxiv.org/abs/1312.6034>`_.
-    The saliency map is the gradient of the max element in output w.r.t input.
-
-    Returns:
-        tf.Tensor: the saliency map. Has the same shape as input.
-    """
-    max_outp = tf.reduce_max(output, 1)
-    saliency_op = tf.gradients(max_outp, input)[:][0]
-    saliency_op = tf.identity(saliency_op, name=name)
-    return saliency_op
-
-
-@deprecated("Please implement it by yourself.", "2018-04-28")
-def shapeless_placeholder(x, axis, name):
-    """
-    Make the static shape of a tensor less specific.
-
-    If you want to feed to a tensor, the shape of the feed value must match
-    the tensor's static shape. This function creates a placeholder which
-    defaults to x if not fed, but has a less specific static shape than x.
-    See also `tensorflow#5680
-    <https://github.com/tensorflow/tensorflow/issues/5680>`_.
-
-    Args:
-        x: a tensor
-        axis(int or list of ints): these axes of ``x.get_shape()`` will become
-            None in the output.
-        name(str): name of the output tensor
-
-    Returns:
-        a tensor equal to x, but shape information is partially cleared.
-    """
-    shp = x.get_shape().as_list()
-    if not isinstance(axis, list):
-        axis = [axis]
-    for a in axis:
-        if shp[a] is None:
-            raise ValueError("Axis {} of shape {} is already unknown!".format(a, shp))
-        shp[a] = None
-    x = tf.placeholder_with_default(x, shape=shp, name=name)
-    return x
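For callers hit by the tightened deprecations, two hedged migration sketches: the huber_loss message names the TF replacement (note it takes labels/predictions rather than a residual, and reduces to a scalar by default via its reduction argument), and the removed saliency_map can be re-created from its old body:

import tensorflow as tf

prediction = tf.placeholder(tf.float32, [None])
ground_truth = tf.placeholder(tf.float32, [None])
# replaces symbolic_functions.huber_loss(prediction - ground_truth, delta=1)
loss = tf.losses.huber_loss(labels=ground_truth, predictions=prediction, delta=1.0)

def saliency_map(output, inp, name='saliency_map'):
    # gradient of the max element of `output` w.r.t. the input,
    # as in the removed helper above
    max_outp = tf.reduce_max(output, 1)
    return tf.identity(tf.gradients(max_outp, inp)[0], name=name)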
tensorpack/tfutils/varmanip.py  +1 -1

...
@@ -11,7 +11,7 @@ from ..utils.develop import deprecated
 from ..utils import logger
 from .common import get_op_tensor_name

-__all__ = ['SessionUpdate', 'dump_session_params', 'dump_chkpt_vars',
+__all__ = ['SessionUpdate', 'dump_session_params',
            'load_chkpt_vars', 'save_chkpt_vars', 'get_checkpoint_path']
...
tensorpack/train/tower.py  +0 -5

...
@@ -6,7 +6,6 @@ import six
 from abc import abstractmethod, ABCMeta
 from ..utils.argtools import call_only_once, memoized
-from ..utils.develop import deprecated
 from ..graph_builder.predict import SimplePredictBuilder
 from ..input_source import PlaceholderInput
 from ..predict.base import OnlinePredictor
...
@@ -37,10 +36,6 @@ class TowerTrainer(Trainer):
         assert isinstance(tower_func, TowerFuncWrapper), tower_func
         self._tower_func = tower_func

-    @deprecated("Just use tower_func = xxx instead!", "2018-06-01")
-    def set_tower_func(self, tower_func):
-        self._set_tower_func(tower_func)

     @property
     def tower_func(self):
         """
...
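set_tower_func is gone; per its deprecation message, plain attribute assignment is the replacement. A sketch where trainer and my_tower_func are illustrative stand-ins (any TowerTrainer subclass and TowerFuncWrapper-wrapped function work):

# old: trainer.set_tower_func(my_tower_func)
trainer.tower_func = my_tower_func   # the tower_func property setter performs the same checks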