Shashank Suhas / seminar-breakout / Commits

Commit 1761a71f, authored Nov 02, 2017 by Yuxin Wu

    py3 doesn't have sys.maxint

parent fc079e63
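The commit message is terse; the underlying incompatibility is that Python 3 removed `sys.maxint` because its integers are unbounded. A minimal sketch of the portable alternatives (`big_default` is an illustrative name, not from this repository):

```python
import sys

# Python 2 exposed sys.maxint; Python 3 removed it since ints are
# unbounded. sys.maxsize exists in both and is the usual substitute;
# this commit instead picks the plain literal 9999999 as a
# "practically infinite" epoch count.
def big_default():
    # Fall back to sys.maxsize when sys.maxint is absent (Python 3).
    return getattr(sys, "maxint", sys.maxsize)

print(big_default() >= 9999999)  # prints True on both Python 2 and 3
```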
Showing 2 changed files with 5 additions and 3 deletions:

  tensorpack/tfutils/optimizer.py  (+3, -0)
  tensorpack/train/base.py         (+2, -3)
tensorpack/tfutils/optimizer.py

@@ -134,6 +134,9 @@ class AccumGradOptimizer(ProxyOptimizer):
     and apply them together in every :math:`k`th :meth:`minimize` call.
     This is equivalent to using a :math:`k` times larger batch size plus a
     :math:`k` times larger learning rate, but uses much less memory.
+
+    Note that this implementation may not support all models.
+    E.g., it doesn't support sparse gradient update.
     """
     def __init__(self, opt, niter):
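The docstring above describes gradient accumulation. A framework-agnostic sketch of the idea, with hypothetical names (this is not tensorpack's implementation, which works on TensorFlow ops):

```python
# Gradient accumulation: sum gradients over `niter` micro-batches,
# then apply a single update. This behaves like a `niter`-times
# larger batch (paired with a correspondingly scaled learning rate),
# but only one micro-batch's activations are in memory at a time.
def accum_grad_step(params, grad_fn, batches, niter, lr):
    accum = [0.0 for _ in params]
    for b in batches[:niter]:
        grads = grad_fn(params, b)                    # per-micro-batch gradients
        accum = [a + g for a, g in zip(accum, grads)]  # accumulate, don't apply
    # single parameter update after niter accumulation steps
    return [p - lr * a for p, a in zip(params, accum)]
```

Note the caveat added by this commit applies to the real implementation too: a scheme like this assumes dense gradients that can simply be summed, which is why sparse gradient updates are not supported.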
tensorpack/train/base.py

@@ -7,7 +7,6 @@ import weakref
 import time
 from six.moves import range
 import six
-import sys

 from ..callbacks import (
     Callback, Callbacks, Monitors, TrainingMonitor,
@@ -235,7 +234,7 @@ class Trainer(object):
     def train(self,
               callbacks, monitors,
               session_creator, session_init,
-              steps_per_epoch, starting_epoch=1, max_epoch=sys.maxint - 1):
+              steps_per_epoch, starting_epoch=1, max_epoch=9999999):
         """
         Implemented by:
@@ -254,7 +253,7 @@ class Trainer(object):
...
@@ -254,7 +253,7 @@ class Trainer(object):
def
train_with_defaults
(
def
train_with_defaults
(
self
,
callbacks
=
None
,
monitors
=
None
,
self
,
callbacks
=
None
,
monitors
=
None
,
session_creator
=
None
,
session_init
=
None
,
session_creator
=
None
,
session_init
=
None
,
steps_per_epoch
=
None
,
starting_epoch
=
1
,
max_epoch
=
sys
.
maxint
-
1
):
steps_per_epoch
=
None
,
starting_epoch
=
1
,
max_epoch
=
9999999
):
"""
"""
Same as :meth:`train()`, but will:
Same as :meth:`train()`, but will:
...
...
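The large `max_epoch` default effectively means "run until stopped by something else". A hypothetical sketch of the loop shape such a default feeds (illustrative only, not tensorpack's actual training loop):

```python
# 9999999 stands in for "practically forever": training normally ends
# via a stop condition (e.g. a callback) rather than the epoch bound.
def run_epochs(steps_per_epoch, starting_epoch=1, max_epoch=9999999,
               stop=lambda epoch: False):
    finished = []
    for epoch in range(starting_epoch, max_epoch + 1):
        if stop(epoch):
            break
        # ... run `steps_per_epoch` training steps here ...
        finished.append(epoch)
    return finished
```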