Shashank Suhas / seminar-breakout / Commits

Commit f128a5c6
Authored Feb 27, 2020 by Julius Simonelli; committed by GitHub on Feb 27, 2020

fix typo (#1403)

Parent: 83e00d7c
Showing 1 changed file with 1 addition and 1 deletion:

docs/tutorial/trainer.md (+1, -1)
docs/tutorial/trainer.md (view file @ f128a5c6)

@@ -67,7 +67,7 @@ All it does is building your model (which you have to provide) once
 For data-parallel multi-GPU training, different [multi-GPU trainers](../modules/train.html)
 implement different distribution strategies.
-They take care of device placement, gradient averaging and synchronoization
+They take care of device placement, gradient averaging and synchronization
 in the efficient way, which is why multi-GPU training in tensorpack
 is up to [5x faster than Keras](https://github.com/tensorpack/benchmarks/tree/master/other-wrappers).
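
For context, the sentence being fixed describes tensorpack's multi-GPU trainers, which choose the distribution strategy independently of the model code. Below is a minimal sketch (not part of this commit) of how such a trainer is typically selected, assuming the standard tensorpack training API; MyModel and my_dataflow are hypothetical placeholders for a ModelDesc subclass and an input DataFlow.

# Minimal sketch (not part of this commit): selecting a data-parallel
# multi-GPU trainer in tensorpack. MyModel and my_dataflow are
# hypothetical placeholders for a ModelDesc subclass and a DataFlow.
from tensorpack import TrainConfig, launch_train_with_config
from tensorpack.train import SyncMultiGPUTrainerReplicated

config = TrainConfig(
    model=MyModel(),        # hypothetical ModelDesc subclass defining the graph
    dataflow=my_dataflow,   # hypothetical DataFlow providing training data
    max_epoch=10,
)

# The trainer object, not the model code, decides the distribution strategy:
# here, synchronous data-parallel training replicated across 2 GPUs, which
# handles device placement, gradient averaging and synchronization.
launch_train_with_config(config, SyncMultiGPUTrainerReplicated(2))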