Commit 45ebac95 authored Mar 22, 2018 by Yuxin Wu
add __init__.py (fix #705)
parent 05bf948f
Showing 3 changed files with 2 additions and 2 deletions

tensorpack/callbacks/summary.py   +1 -1
tensorpack/contrib/__init__.py    +0 -0
tensorpack/tfutils/optimizer.py   +1 -1
tensorpack/callbacks/summary.py

...
@@ -138,7 +138,7 @@ class SimpleMovingAverage(Callback):
             window_size (int): size of the moving window
         """
-        self._tensors_names = [get_op_tensor_name(x)[1] for x in tensors]
+        self._tensor_names = [get_op_tensor_name(x)[1] for x in tensors]
         self._display_names = [get_op_tensor_name(x)[0] for x in tensors]
         self._window = int(window_size)
         self._queue = deque(maxlen=window_size)
...
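The renamed attribute feeds the callback's moving-average bookkeeping: each name is split by get_op_tensor_name into an op name (index 0, used for display) and a tensor name (index 1, used for fetching), and fetched values are averaged over a fixed-size deque. A minimal, self-contained sketch of that window logic (toy values, not tensorpack's actual callback) could look like this:

from collections import deque

window_size = 3                      # size of the moving window
queue = deque(maxlen=window_size)    # oldest value falls off automatically

for value in [1.0, 2.0, 3.0, 4.0, 5.0]:
    queue.append(value)
    if len(queue) == window_size:
        # average of the most recent `window_size` values
        print(sum(queue) / window_size)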
tensorpack/contrib/__init__.py  0 → 100644
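The change referenced by the commit message is this new, empty tensorpack/contrib/__init__.py, which marks the directory as a regular Python package. A trivial check of the effect (assuming this revision of tensorpack is importable on the current path):

# The empty tensorpack/contrib/__init__.py added by this commit makes the
# directory a regular Python package, so this import can resolve.
import tensorpack.contrib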
tensorpack/tfutils/optimizer.py

...
@@ -134,7 +134,7 @@ class AccumGradOptimizer(ProxyOptimizer):
     """
     An optimizer which accumulates gradients across :math:`k` :meth:`minimize` calls,
     and apply them together in every :math:`k`th :meth:`minimize` call.
-    This is equivalent to using a :math:`k` times larger batch size plus a
+    This is roughly the same as using a :math:`k` times larger batch size plus a
     :math:`k` times larger learning rate, but uses much less memory.

     Note that this implementation may not support all models.
...
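The reworded docstring describes gradient accumulation: gradients from :math:`k` consecutive minimize calls are summed and applied as a single update every :math:`k`-th call. A framework-free sketch of that idea (hypothetical toy loss, not tensorpack's TensorFlow implementation) might be:

k = 4                # accumulate gradients over k "minimize" calls
lr = 0.1             # base learning rate
w = 0.0              # hypothetical scalar parameter
accum = 0.0          # running sum of gradients

def grad(w, x):
    # gradient of the toy loss 0.5 * (w - x) ** 2 with respect to w
    return w - x

for step, x in enumerate([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0], start=1):
    accum += grad(w, x)
    if step % k == 0:          # every k-th call, apply the summed gradients once
        w -= lr * accum
        accum = 0.0

print(w)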