Shashank Suhas / seminar-breakout / Commits / 00fdb263

Commit 00fdb263, authored Feb 15, 2017 by Yuxin Wu
Parent: d6a89f0b

    refactor in viz. stack_patches now handles nonuniform patches as well.

Showing 6 changed files with 186 additions and 122 deletions (+186 −122).
Changed files:

    examples/GAN/ConditionalGAN-mnist.py   +1   −1
    examples/GAN/DCGAN-CelebA.py           +1   −1
    examples/GAN/Image2Image.py            +1   −1
    examples/GAN/InfoGAN-mnist.py          +5   −5
    tensorpack/tfutils/optimizer.py        +1   −1
    tensorpack/utils/viz.py                +177 −113
examples/GAN/ConditionalGAN-mnist.py

@@ -125,7 +125,7 @@ def sample(model_path):
     pred = SimpleDatasetPredictor(pred, ds)
     for o in pred.get_result():
         o = o[0] * 255.0
-        viz = next(build_patch_list(o, nr_row=10, nr_col=10))
+        viz = stack_patches(o, nr_row=10, nr_col=10)
         viz = cv2.resize(viz, (800, 800))
         interactive_imshow(viz)
examples/GAN/DCGAN-CelebA.py

@@ -121,7 +121,7 @@ def sample(model_path):
         o, zs = o[0] + 1, o[1]
         o = o * 128.0
         o = o[:, :, :, ::-1]
-        viz = next(build_patch_list(o, nr_row=10, nr_col=10, viz=True))
+        viz = stack_patches(o, nr_row=10, nr_col=10, viz=True)


 if __name__ == '__main__':
examples/GAN/Image2Image.py

@@ -196,7 +196,7 @@ def sample(datadir, model_path):
     pred = SimpleDatasetPredictor(pred, ds)
     for o in pred.get_result():
         o = o[0][:, :, :, ::-1]
-        next(build_patch_list(o, nr_row=3, nr_col=2, viz=True))
+        stack_patches(o, nr_row=3, nr_col=2, viz=True)


 if __name__ == '__main__':
examples/GAN/InfoGAN-mnist.py

@@ -192,24 +192,24 @@ def sample(model_path):
     z_noise = np.random.uniform(-1, 1, (100, NOISE_DIM))
     zc = np.concatenate((z_cat, z_uni * 0, z_uni * 0), axis=1)
     o = pred(zc, z_noise)[0]
-    viz1 = next(build_patch_list(o, nr_row=10, nr_col=10))
+    viz1 = stack_patches(o, nr_row=10, nr_col=10)
     viz1 = cv2.resize(viz1, (IMG_SIZE, IMG_SIZE))

     # show effect of first continous variable with fixed noise
     zc = np.concatenate((z_cat, z_uni, z_uni * 0), axis=1)
     o = pred(zc, z_noise * 0)[0]
-    viz2 = next(build_patch_list(o, nr_row=10, nr_col=10))
+    viz2 = stack_patches(o, nr_row=10, nr_col=10)
     viz2 = cv2.resize(viz2, (IMG_SIZE, IMG_SIZE))

     # show effect of second continous variable with fixed noise
     zc = np.concatenate((z_cat, z_uni * 0, z_uni), axis=1)
     o = pred(zc, z_noise * 0)[0]
-    viz3 = next(build_patch_list(o, nr_row=10, nr_col=10))
+    viz3 = stack_patches(o, nr_row=10, nr_col=10)
     viz3 = cv2.resize(viz3, (IMG_SIZE, IMG_SIZE))

-    viz = next(build_patch_list(
-        [viz1, viz2, viz3],
-        nr_row=1, nr_col=3, border=5, bgcolor=(255, 0, 0)))
+    viz = stack_patches(
+        [viz1, viz2, viz3],
+        nr_row=1, nr_col=3, border=5, bgcolor=(255, 0, 0))
     interactive_imshow(viz)
tensorpack/tfutils/optimizer.py

@@ -124,7 +124,7 @@ class AccumGradOptimizer(ProxyOptimizer):
     An optimizer which accumulates gradients across :math:`k` :meth:`minimize` calls,
     and apply them together in every :math:`k`th :meth:`minimize` call.
     This is equivalent to using a :math:`k` times larger batch size plus a
-    :math:`k` times larger learning rate, but use much less memory.
+    :math:`k` times larger learning rate, but uses much less memory.
     """

     def __init__(self, opt, niter):
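The docstring above describes gradient accumulation: sum gradients over k steps, then apply one combined update, mimicking a k-times-larger batch. A minimal numpy sketch of that idea (this is an illustration only, not tensorpack's AccumGradOptimizer, which wraps a TensorFlow optimizer and accumulates in slot variables):

```python
import numpy as np

def sgd_accum(w, grads, lr, k):
    """Plain-SGD sketch of k-step gradient accumulation.

    Gradients are summed (not applied) for k steps; every k-th step
    the accumulated sum is applied once and the accumulator reset.
    Summing rather than averaging is what makes this behave like a
    k-times-larger batch with a k-times-larger learning rate.
    """
    acc = np.zeros_like(w, dtype=float)
    out = np.asarray(w, dtype=float).copy()
    for i, g in enumerate(grads, 1):
        acc += g                  # accumulate instead of applying
        if i % k == 0:            # every k-th call: apply and reset
            out -= lr * acc
            acc[:] = 0.0
    return out
```

With `k=1` this degenerates to ordinary SGD; the memory saving in the real optimizer comes from holding one extra buffer per variable instead of a k-times-larger batch of activations.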
tensorpack/utils/viz.py

(This diff, +177 −113, is collapsed in the web view.)
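The call sites above show the shape of the new API: `stack_patches(patches, nr_row, nr_col, ...)` returns a single grid image, replacing the `next(build_patch_list(...))` generator idiom, and per the commit message it now accepts patches of different sizes. A minimal grayscale sketch of such a helper (the function name, defaults, and centering behavior here are assumptions for illustration, not tensorpack's actual implementation):

```python
import numpy as np

def stack_patches_sketch(patches, nr_row, nr_col, border=1, bgcolor=255):
    """Stack possibly nonuniform 2-D patches into one nr_row x nr_col grid.

    Each grid cell is sized to the largest patch; smaller patches are
    centered in their cell, with `border` pixels of `bgcolor` between cells.
    """
    patches = [np.asarray(p) for p in patches]
    # Cell size comes from the largest patch, so nonuniform patches fit.
    cell_h = max(p.shape[0] for p in patches)
    cell_w = max(p.shape[1] for p in patches)
    canvas = np.full((nr_row * (cell_h + border) + border,
                      nr_col * (cell_w + border) + border),
                     bgcolor, dtype=patches[0].dtype)
    for idx, p in enumerate(patches[:nr_row * nr_col]):
        r, c = divmod(idx, nr_col)
        # Top-left corner of this cell, plus centering offset.
        y = border + r * (cell_h + border) + (cell_h - p.shape[0]) // 2
        x = border + c * (cell_w + border) + (cell_w - p.shape[1]) // 2
        canvas[y:y + p.shape[0], x:x + p.shape[1]] = p
    return canvas
```

The real function also handles color channels and options like `viz=True` (display the result) seen in the diffs above; those are omitted here for brevity.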