Shashank Suhas / seminar-breakout
Commit d814317c, authored Jul 26, 2016 by Yuxin Wu

update readme

parent 095c1cd9
Showing 2 changed files with 26 additions and 6 deletions:

README.md: +12, -6
examples/README.md: +14, -0
README.md

@@ -3,7 +3,7 @@ Neural Network Toolbox on TensorFlow
 Still in development, but usable.
-See some interesting [examples](https://github.com/ppwwyyxx/tensorpack/tree/master/examples) to learn about the framework:
+See some interesting [examples](examples) to learn about the framework:
 + [DoReFa-Net: training binary / low bitwidth CNN](examples/DoReFa-Net)
 + [Double-DQN for playing Atari games](examples/Atari2600)
@@ -15,13 +15,19 @@ See some interesting [examples](https://github.com/ppwwyyxx/tensorpack/tree/mast
 Focus on modularity. You just have to define the following three components to start a training:
 1. The model, or the graph. Define the graph as well as its inputs and outputs. `models/` has some scoped abstraction of common models.
+   `LinearWrap` and `argscope` make large models look simpler.
-2. The data. All data producer has an unified `DataFlow` interface, and this interface can be chained
-   to perform complex preprocessing. It uses multiprocess to avoid performance bottleneck on data loading.
+2. The data. tensorpack allows and encourages complex data processing.
+   + All data producers have a unified `DataFlow` interface, allowing them to be composed to perform complex preprocessing.
+   + Use Python to easily handle your own data format, yet still keep a good training speed thanks to multiprocess prefetch & TF Queue prefetch.
+     For example, InceptionV3 can run at the same speed as the official code, which reads data using TF operators.
-3. The callbacks. They include everything you want to do apart from the training iterations:
-   change hyperparameters, save models, print logs, run validation, and more.
+3. The callbacks, including everything you want to do apart from the training iterations. For example:
+   + Change hyperparameters
+   + Save models
+   + Print some variables of interest
+   + Run inference on a test dataset
 With the above components defined, the tensorpack trainer will run the training iterations for you.
 Multi-GPU training is ready to use by simply changing the trainer.
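To make the `LinearWrap` / `argscope` idea in item 1 of the hunk above concrete, here is a minimal sketch of what a model body might look like. It is not code from this commit; the star import, layer names, and argument names are assumptions based on tensorpack examples from around this time.

```python
# Sketch only -- not code from this commit. Assumes the mid-2016 tensorpack API,
# where examples typically did `from tensorpack import *` to get layers and argscope.
import tensorflow as tf
from tensorpack import *  # assumed to provide LinearWrap, argscope, Conv2D, MaxPooling, FullyConnected

def build_graph(image):
    """Build a small ConvNet; every layer name and argument here is illustrative."""
    # argscope fills in default arguments for every Conv2D call inside the block,
    # so each call only needs a name; LinearWrap chains layers so a linear model
    # reads top to bottom.
    with argscope(Conv2D, kernel_shape=3, nl=tf.nn.relu, out_channel=32):
        logits = (LinearWrap(image)
                  .Conv2D('conv0')
                  .MaxPooling('pool0', 2)
                  .Conv2D('conv1')
                  .MaxPooling('pool1', 2)
                  .FullyConnected('fc0', 512, nl=tf.nn.relu)
                  .FullyConnected('fc1', out_dim=10, nl=tf.identity)())  # () unwraps the output tensor
    return logits
```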
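Item 2's composable `DataFlow` idea might look roughly like the following in practice. Again, this is only a sketch and not part of the commit; `dataset.Mnist`, `BatchData`, `PrefetchData`, and their argument names are assumptions based on tensorpack's dataflow module of this period.

```python
# Sketch only -- illustrates chaining DataFlows as described in item 2 above.
# Names and signatures are assumptions based on tensorpack's dataflow module circa 2016.
from tensorpack import *  # assumed to expose dataset, BatchData, PrefetchData

def get_data(train=True):
    # dataset.Mnist yields [image, label] datapoints and is itself a DataFlow.
    ds = dataset.Mnist('train' if train else 'test')
    # Wrappers are DataFlows too, so they compose freely:
    # first group datapoints into batches, then prefetch with worker processes.
    ds = BatchData(ds, 128, remainder=not train)
    if train:
        ds = PrefetchData(ds, nr_prefetch=3, nr_proc=2)  # multiprocess prefetch
    return ds
```

Changing the `PrefetchData` arguments adjusts how many worker processes feed the pipeline, without touching the rest of the chain.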
examples/README.md (new file, mode 100644)
# tensorpack examples

Only allow examples with reproducible and meaningful performance.

+ [An illustrative mnist example](mnist-convnet.py)
+ [A small Cifar10 ConvNet with 91% accuracy](cifar-convnet.py)
+ [A tiny SVHN ConvNet with 97.5% accuracy](svhn-digit-convnet.py)
+ [Reproduce some reinforcement learning papers](Atari2600)
+ [char-rnn for fun](char-rnn)
+ [DisturbLabel, because I don't believe the paper](DisturbLabel)
+ [DoReFa-Net, binary / low-bitwidth CNN on ImageNet](DoReFa-Net)
+ [GoogleNet-InceptionV1 with 71% accuracy](Inception)
+ [ResNet for Cifar10 with similar accuracy, and for SVHN with state-of-the-art accuracy](ResNet)