Commit 8dd254be authored by Yuxin Wu

[DoReFa] remove memoization & warning, they are not needed after having custom_gradient

parent f831a46e
...
@@ -4,14 +4,10 @@
 import tensorflow as tf
-from tensorpack.utils.argtools import graph_memoized

-@graph_memoized
 def get_dorefa(bitW, bitA, bitG):
     """
     Return the three quantization functions fw, fa, fg, for weights, activations and gradients respectively
-    It's unsafe to call this function multiple times with different parameters
     """
     def quantize(x, k):
         n = float(2 ** k - 1)
...
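The diff drops `@graph_memoized` (and the "unsafe to call multiple times" warning) because the quantizers now wrap their ops in `tf.custom_gradient` internally, so `get_dorefa` can be called repeatedly with different bit widths. For context, the truncated `quantize` helper implements the standard DoReFa forward quantizer, `round(x * n) / n` with `n = 2**k - 1`. A minimal framework-free sketch of just that forward math (illustrative only; in tensorpack the rounding runs under `tf.custom_gradient` with a straight-through identity gradient):

```python
def quantize(x, k):
    """Quantize x in [0, 1] to k-bit fixed point (forward pass only).

    Sketch of the DoReFa forward quantizer; the real version runs the
    round() under tf.custom_gradient so the backward pass is identity.
    """
    n = float(2 ** k - 1)  # number of quantization levels minus one, as in the diff
    return round(x * n) / n
```

For k=2 this snaps inputs to multiples of 1/3, e.g. `quantize(0.34, 2)` returns 1/3 and `quantize(1.0, 2)` returns 1.0.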