Shashank Suhas / seminar-breakout · Commits

Commit 8d668003, authored Feb 10, 2018 by Yuxin Wu

docs change

parent 228aead3
Showing 5 changed files with 10 additions and 6 deletions (+10 −6)
docs/tutorial/inference.md                   +2 −0
examples/FasterRCNN/eval.py                  +1 −1
examples/FasterRCNN/train.py                 +3 −3
examples/README.md                           +3 −1
tensorpack/input_source/input_source.py      +1 −1
docs/tutorial/inference.md

@@ -15,6 +15,8 @@ There are two ways to do inference during training.
    This will further support prefetch & data-parallel inference.
    More details to come.

+In both methods, your tower function will be called again, with
+`TowerContext.is_training==False`. You can build a different graph using this predicate.

 ## Inference After Training
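The two added lines above are the substance of this docs change. To illustrate the predicate they describe, here is a minimal sketch of a tower function that builds a slightly different graph for training and inference; `get_current_tower_context` is tensorpack's accessor for the active TowerContext, while the layer sizes and the dropout branch are assumptions made up for this example.

import tensorflow as tf
from tensorpack.tfutils.tower import get_current_tower_context

def tower_func(image):
    ctx = get_current_tower_context()   # the TowerContext the trainer set up
    x = tf.layers.dense(image, 128, activation=tf.nn.relu)
    if ctx.is_training:
        # This branch exists only in the training graph; inference towers
        # (ctx.is_training == False) skip it.
        x = tf.nn.dropout(x, keep_prob=0.5)
    return tf.layers.dense(x, 10)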
examples/FasterRCNN/eval.py

@@ -91,7 +91,7 @@ def detect_one_image(img, model_func):
     return results


-def eval_on_dataflow(df, detect_func):
+def eval_coco(df, detect_func):
    """
    Args:
        df: a DataFlow which produces (image, image_id)
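The function is renamed from eval_on_dataflow to eval_coco; its contract is unchanged. For reference, a hypothetical minimal DataFlow matching the docstring above, where each datapoint is an (image, image_id) pair; DataFromList is a real tensorpack dataflow, and the zero-filled images and ids below are fabricated purely for illustration.

import numpy as np
from tensorpack.dataflow import DataFromList

# Two fake (image, image_id) datapoints of the form eval_coco expects.
points = [(np.zeros((600, 800, 3), dtype=np.uint8), 1),
          (np.zeros((600, 800, 3), dtype=np.uint8), 2)]
df = DataFromList(points, shuffle=False)
# eval_coco(df, detect_func) would iterate df, run detect_func on each
# image, and collect the detections keyed by image_id.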
examples/FasterRCNN/train.py

@@ -39,7 +39,7 @@ from viz import (
     draw_predictions, draw_final_outputs)
 from common import print_config
 from eval import (
-    eval_on_dataflow, detect_one_image, print_evaluation_scores, DetectionResult)
+    eval_coco, detect_one_image, print_evaluation_scores, DetectionResult)
 import config
@@ -280,7 +280,7 @@ def visualize(model_path, nr_visualize=50, output_dir='output'):
 def offline_evaluate(pred_func, output_file):
     df = get_eval_dataflow()
-    all_results = eval_on_dataflow(
+    all_results = eval_coco(
         df, lambda img: detect_one_image(img, pred_func))
     with open(output_file, 'w') as f:
         json.dump(all_results, f)
@@ -309,7 +309,7 @@ class EvalCallback(Callback):
         self.epochs_to_eval.add(self.trainer.max_epoch)

     def _eval(self):
-        all_results = eval_on_dataflow(
+        all_results = eval_coco(
             self.df, lambda img: detect_one_image(img, self.pred))
         output_file = os.path.join(
             logger.get_logger_dir(), 'outputs{}.json'.format(self.global_step))
         with open(output_file, 'w') as f:
examples/README.md

@@ -3,12 +3,14 @@
 Training examples with __reproducible performance__.

 __"Reproduce" should always mean reproduce performance__.
-Reproducible performance results are what really matters.
+Reproducing a method is usually easy, but you don't know whether you've made mistakes, because wrong code will often appear to work.
+Reproducing __performance__ results is what really matters, and is something that's hardly seen on github.
+See [Unawareness of Deep Learning Mistakes](https://medium.com/@ppwwyyxx/unawareness-of-deep-learning-mistakes-d5b5774da0ba).

 ## Getting Started:
 These examples don't have meaningful performance numbers. They are supposed to be just demos.
 + [An illustrative mnist example with explanation of the framework](mnist-convnet.py)
 + The same mnist example using [tf-slim](mnist-tfslim.py), and [with weights visualizations](mnist-visualizations.py)
 + A tiny [Cifar ConvNet](cifar-convnet.py) and [SVHN ConvNet](svhn-digit-convnet.py)
tensorpack/input_source/input_source.py

@@ -223,7 +223,7 @@ class QueueInput(FeedfreeInput):
         with self.cached_name_scope():
             # in TF there is no API to get queue capacity, so we can only summarize the size
             size = tf.cast(self.queue.size(), tf.float32, name='queue_size')
-            size_ema_op = add_moving_summary(size, collection=None)[0].op
+            size_ema_op = add_moving_summary(size, collection=None, decay=0.5)[0].op
             return RunOp(
                 lambda: size_ema_op,
                 run_before=False,
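The only change here is the explicit decay=0.5, which shortens the memory of the queue-size moving average so the summary tracks current occupancy more closely than tensorpack's smoother default. A minimal sketch of the exponential-moving-average rule that this decay parameter controls (the actual averaging happens inside add_moving_summary via TensorFlow's moving-average ops; the queue sizes below are made up):

# EMA update with decay d: ema = d * ema + (1 - d) * value.
# A small decay like 0.5 weights recent values heavily.
decay = 0.5
ema = 0.0
for size in [64.0, 48.0, 50.0]:    # fabricated successive queue sizes
    ema = decay * ema + (1 - decay) * size
print(ema)                          # 45.0 after the three updates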