Shashank Suhas / seminar-breakout / Commits

Commit 8380cfa7
Authored Dec 31, 2017 by Yuxin Wu
[FasterRCNN] write evaluation to monitors

Parent: 230efcc1
Showing 2 changed files with 9 additions and 4 deletions (+9 -4):

examples/FasterRCNN/eval.py    +4 -0
examples/FasterRCNN/train.py   +5 -4
examples/FasterRCNN/eval.py  (view file @ 8380cfa7)

@@ -129,6 +129,7 @@ def eval_on_dataflow(df, detect_func):
 # https://github.com/pdollar/coco/blob/master/PythonAPI/pycocoEvalDemo.ipynb
 def print_evaluation_scores(json_file):
+    ret = {}
     assert config.BASEDIR and os.path.isdir(config.BASEDIR)
     annofile = os.path.join(
         config.BASEDIR, 'annotations',
@@ -139,9 +140,12 @@ def print_evaluation_scores(json_file):
     cocoEval.evaluate()
     cocoEval.accumulate()
     cocoEval.summarize()
+    ret['mAP(bbox)'] = cocoEval.stats[0]
     if config.MODE_MASK:
         cocoEval = COCOeval(coco, cocoDt, 'segm')
         cocoEval.evaluate()
         cocoEval.accumulate()
         cocoEval.summarize()
+        ret['mAP(segm)'] = cocoEval.stats[0]
+    return ret
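With this change, print_evaluation_scores() collects the headline COCO numbers into a dict instead of only printing the summary tables. A minimal usage sketch, not part of the commit; the file name is hypothetical and the 'mAP(segm)' key depends on the config:

    # Illustrative only: run COCO evaluation on a detection-results JSON file
    # and read back the returned scores.
    scores = print_evaluation_scores('outputs.json')   # hypothetical path
    print(scores['mAP(bbox)'])     # cocoEval.stats[0]: box mAP @ IoU=0.50:0.95
    if 'mAP(segm)' in scores:      # present only when config.MODE_MASK is enabled
        print(scores['mAP(segm)'])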
examples/FasterRCNN/train.py  (view file @ 8380cfa7)

@@ -8,7 +8,6 @@ import cv2
 import shutil
 import itertools
 import tqdm
-import math
 import numpy as np
 import json
 import tensorflow as tf
@@ -313,7 +312,9 @@ class EvalCallback(Callback):
             logger.get_logger_dir(), 'outputs{}.json'.format(self.global_step))
         with open(output_file, 'w') as f:
             json.dump(all_results, f)
-        print_evaluation_scores(output_file)
+        scores = print_evaluation_scores(output_file)
+        for k, v in scores.items():
+            self.trainer.monitors.put_scalar(k, v)

     def _trigger_epoch(self):
         if self.epoch_num in self.epochs_to_eval:
@@ -359,8 +360,8 @@ if __name__ == '__main__':
     else:
         logger.set_logger_dir(args.logdir)
     print_config()
-    stepnum = 300
-    warmup_epoch = max(math.ceil(500.0 / stepnum), 5)
+    stepnum = 500
+    warmup_epoch = 3
     factor = get_batch_factor()

     cfg = TrainConfig(
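The train.py half is what the commit title refers to: instead of calling print_evaluation_scores() only for its printed output, EvalCallback now takes the returned dict and pushes each score through self.trainer.monitors.put_scalar(), so the mAP values are recorded by whatever monitors the trainer has registered (typically TensorBoard events, the JSON stats file, and the console printer). Below is a minimal sketch of the same pattern as a standalone tensorpack callback, assuming a user-supplied compute_metrics callable; the class and parameter names are hypothetical, not from the commit:

    from tensorpack.callbacks import Callback

    class ScalarMetricsCallback(Callback):
        """Illustrative sketch: push a dict of scalar metrics to the monitors
        at the end of every epoch, mirroring what EvalCallback now does."""

        def __init__(self, compute_metrics):
            # compute_metrics: callable returning {metric_name: float}
            self._compute_metrics = compute_metrics

        def _trigger_epoch(self):
            for k, v in self._compute_metrics().items():
                # put_scalar() hands the value to every registered monitor,
                # keyed by the metric name.
                self.trainer.monitors.put_scalar(k, v)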