Shashank Suhas / seminar-breakout · Commits

Commit cfd3abe2, authored Jun 25, 2016 by Yuxin Wu

    update readme

Parent: a1a957b4
Showing 1 changed file with 10 additions and 6 deletions:

README.md (+10 −6)
README.md · View file @ cfd3abe2

  # tensorpack
  Neural Network Toolbox on TensorFlow
- In development but usable. API might change a bit.
+ Still in development, but usable.
  See some interesting [examples](https://github.com/ppwwyyxx/tensorpack/tree/master/examples) to learn about the framework:
  ...
@@ -14,10 +14,14 @@ See some interesting [examples](https://github.com/ppwwyyxx/tensorpack/tree/mast
  Focused on modularity. Just have to define the three components in training:
- 1. The model, or the graph. Define its input and output. `models/` has some scoped abstraction of common models.
+ 1. The model, or the graph. Define the graph as well as its inputs and outputs. `models/` has some scoped abstraction of common models.
- 2. The data. All data producer has a unified `DataFlow` interface, and this interface can be chained to perform complex preprocessing. It uses multiprocess to avoid performance bottleneck.
+ 2. The data. All data producer has an unified `DataFlow` interface, and this interface can be chained to perform complex preprocessing. It uses multiprocess to avoid performance bottleneck on data loading.
- 3. The callbacks. They include everything you want to do besides the training iterations: change hyperparameters, save model, print logs, run validation, and more.
+ 3. The callbacks. They include everything you want to do apart from the training iterations: change hyperparameters, save models, print logs, run validation, and more.
  With the above components defined, tensorpack trainer will run the training iterations for you.
  Multi-GPU training is ready to use by simply changing the trainer.
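The three-component design the README describes (model, chainable data producers, callbacks driven by a trainer loop) can be sketched in plain Python. This is a minimal illustrative sketch only; every name below is hypothetical and none of it is tensorpack's actual API.

```python
# Hypothetical sketch of the three components the README lists; the names
# (base_dataflow, batch, model, train) are illustrative, not tensorpack's API.

def base_dataflow():
    """Component 2, the data: a producer that yields datapoints one by one."""
    for i in range(4):
        yield i

def batch(df, size):
    """A chained producer: wraps another one to add preprocessing (batching),
    mirroring how DataFlow interfaces can be chained."""
    buf = []
    for dp in df:
        buf.append(dp)
        if len(buf) == size:
            yield buf
            buf = []

def model(datapoint):
    """Component 1, the model/graph: maps an input batch to an output
    (a toy sum stands in for a real network)."""
    return sum(datapoint)

def train(dataflow, model_fn, callbacks, steps=2):
    """The trainer runs the iterations for you; callbacks handle everything
    else (component 3: logging, saving, validation, ...)."""
    logs = []
    for step, dp in zip(range(steps), dataflow):
        out = model_fn(dp)
        for cb in callbacks:
            cb(step, out, logs)
    return logs

# A callback that only records (step, output) pairs.
log_cb = lambda step, out, logs: logs.append((step, out))

history = train(batch(base_dataflow(), 2), model, [log_cb])
print(history)  # [(0, 1), (1, 5)]
```

With the components decoupled this way, swapping the trainer (e.g. for a multi-GPU variant, as the README claims) would not require touching the model, data, or callback definitions.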