Shashank Suhas / seminar-breakout / Commits

Commit 8dd254be authored Apr 10, 2019 by Yuxin Wu

[DoReFa] remove memoization & warning, they are not needed after having custom_gradient

Parent: f831a46e

1 changed file with 0 additions and 4 deletions:

examples/DoReFa-Net/dorefa.py (+0 / -4)
@@ -4,14 +4,10 @@
 import tensorflow as tf
-from tensorpack.utils.argtools import graph_memoized
-
-@graph_memoized
 def get_dorefa(bitW, bitA, bitG):
     """
     Return the three quantization functions fw, fa, fg, for weights, activations and gradients respectively
-    It's unsafe to call this function multiple times with different parameters
     """
     def quantize(x, k):
         n = float(2 ** k - 1)
...
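Why the guard can go: the usual reason for a decorator like @graph_memoized plus an "unsafe to call multiple times" warning is that the quantization gradient used to be registered under a fixed global name (e.g. via tf.RegisterGradient / gradient_override_map, which cannot re-register the same name), so a second call with different bit widths would clash. With tf.custom_gradient, the straight-through gradient is attached to each op as it is built, so every call of get_dorefa is independent. A minimal sketch of that pattern for the quantize helper, assuming TensorFlow 1.13+ where tf.custom_gradient is available (the exact body in dorefa.py may differ):

import tensorflow as tf

def quantize(x, k):
    # Map x in [0, 1] onto a grid of 2**k - 1 evenly spaced levels.
    n = float(2 ** k - 1)

    @tf.custom_gradient
    def _quantize(x):
        # Forward: round onto the quantization grid.
        # Backward: straight-through estimator -- pass dy through
        # unchanged, since round() has zero gradient almost everywhere.
        return tf.round(x * n) / n, lambda dy: dy

    return _quantize(x)

Because the gradient is bound per call rather than per graph, something like get_dorefa(1, 2, 6) and get_dorefa(8, 8, 32) can coexist in one graph, which matches the commit message: the memoization and the docstring warning are no longer needed.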