Commit 4e933ef9 authored by Yuxin Wu's avatar Yuxin Wu

remove opt-requirements.txt

parent 09942e47
include requirements.txt
-include opt-requirements.txt
@@ -2,7 +2,7 @@
# Performance Tuning
__We do not know why your training is slow__.
-Performance is different on every machine. So you need to figure out most parts by your own.
+Performance is different across machines and tasks. So you need to figure out most parts on your own.
Here's a list of things you can do when your training is slow.
If you're going to open an issue about slow training, PLEASE do them and include your findings.
@@ -14,14 +14,14 @@ If you're going to open an issue about slow training, PLEASE do them and include
2. If you use queue-based input + dataflow, you can look for the queue size statistics in
   the training log. Ideally the queue should be near-full (default size is 50).
If the size is near-zero, data is the bottleneck.
-3. If the GPU utilization is low, it may be because of slow data, or some ops are inefficient. Also make sure GPUs are not locked in P8 state.
+3. If GPU utilization is low, it may be because of slow data or inefficient ops. Also make sure GPUs are not locked in P8 state.
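The queue-size heuristic in (2) can be illustrated outside tensorpack with a plain `queue.Queue` (a hypothetical standalone sketch; tensorpack prints this statistic for you in the training log):

```python
import queue
import threading
import time

# A bounded queue standing in for the input queue (default size 50, as above).
q = queue.Queue(maxsize=50)

def producer():
    # Simulated data loading; make this slower and the fill ratio drops.
    while True:
        q.put(object())
        time.sleep(0.001)

threading.Thread(target=producer, daemon=True).start()
time.sleep(0.5)  # let the producer run with no consumer draining the queue

fill_ratio = q.qsize() / 50
# near 1.0: data loading keeps up; near 0.0: data is the bottleneck
print(fill_ratio)
```

With a slow `producer` (e.g. `time.sleep(0.1)` per item) the ratio stays near zero, which is the signature of a data bottleneck.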
## Benchmark the components
1. Use `DummyConstantInput(shapes)` as the `InputSource`,
   so that the iterations don't take any data from the Python side but train on a constant tensor.
   This will help find the slow operations in your graph.
2. Use `dataflow=FakeData(shapes, random=False)` to replace your original DataFlow by a constant DataFlow.
-   This has similar effect to (1), i.e., it eliminates the overhead of data.
+   This is almost the same as (1), i.e., it eliminates the overhead of data.
3. If you're using a TF-based input pipeline you wrote, you can simply run it in a loop and test its speed.
4. Use `TestDataSpeed(mydf).start()` to benchmark your DataFlow.
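Steps (2)–(4) share one idea: pull datapoints in a bare loop and measure items per second, with no training attached. A minimal sketch of such a benchmark, with a trivial generator standing in for your DataFlow (`fake_reader` and `benchmark` are hypothetical helpers, not tensorpack APIs):

```python
import time

def benchmark(df_iter, num_items=2000):
    # Pull items in a tight loop, as TestDataSpeed does, and report items/sec.
    start = time.perf_counter()
    for _ in range(num_items):
        next(df_iter)
    elapsed = time.perf_counter() - start
    return num_items / elapsed

def fake_reader():
    # Stand-in for a raw reader: yields constant datapoints,
    # similar in spirit to FakeData(shapes, random=False).
    while True:
        yield [0.0] * 224

rate = benchmark(fake_reader())
print(round(rate), "items/sec")
```

Run the same loop on your real DataFlow and compare: if the raw reader is fast but the full pipeline is not, the added processing stages are the bottleneck.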
@@ -31,7 +31,7 @@ A benchmark will give you more precise information about which part you should improve.
Understand the [Efficient DataFlow](efficient-dataflow.html) tutorial, so you know what your DataFlow is doing.
-Benchmark your DataFlow with modifications and you'll understand why it runs slow. Some examples
+Benchmark your DataFlow with modifications and you'll understand which part is the bottleneck. Some examples
include:
1. Remove everything except for the raw reader (and perhaps add some prefetching).
......
@@ -4,7 +4,7 @@ Faster-RCNN / Mask-RCNN (without FPN) on COCO.
## Dependencies
+ Python 3; TensorFlow >= 1.4.0
-+ Install [pycocotools](https://github.com/pdollar/coco/tree/master/PythonAPI/pycocotools), OpenCV.
++ [pycocotools](https://github.com/pdollar/coco/tree/master/PythonAPI/pycocotools), OpenCV.
+ Pre-trained [ResNet model](https://goo.gl/6XjK9V) from tensorpack model zoo.
+ COCO data. It assumes the following directory structure:
```
......
@@ -21,24 +21,6 @@ except ImportError:
# configure requirements
reqfile = os.path.join(CURRENT_DIR, 'requirements.txt')
req = [x.strip() for x in open(reqfile).readlines()]
reqfile = os.path.join(CURRENT_DIR, 'opt-requirements.txt')
extra_req = [x.strip() for x in open(reqfile).readlines()]
if sys.version_info.major < 3:
    extra_req.append('tornado')
# parse scripts
scripts = ['scripts/plot-point.py', 'scripts/dump-model-params.py']
scripts_to_install = []
for s in scripts:
    dirname = os.path.dirname(s)
    basename = os.path.basename(s)
    if basename.endswith('.py'):
        basename = basename[:-3]
    newname = 'tpk-' + basename  # install scripts with a prefix to avoid name confusion
    # setup.py could be executed a second time in the same dir
    if not os.path.isfile(newname):
        shutil.move(s, newname)
    scripts_to_install.append(newname)
setup(
name='tensorpack',
@@ -48,7 +30,7 @@ setup(
install_requires=req,
tests_require=['flake8', 'scikit-image'],
extras_require={
-        'all': extra_req
+        'all': ['pillow', 'scipy', 'h5py', 'lmdb>=0.92', 'matplotlib',
+                'scikit-learn', "tornado; python_version < '3.0'"]
},
scripts=scripts_to_install,
)
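The inline `extras_require` list replaces the removed runtime check with a PEP 508 environment marker: `"tornado; python_version < '3.0'"` tells pip to install tornado only on Python 2, which is what the deleted `sys.version_info` branch did while `setup.py` executed. A minimal sketch of the equivalence (plain Python, no setuptools needed):

```python
import sys

# What the removed code computed imperatively at setup.py execution time:
extra_req = ['pillow', 'scipy', 'h5py', 'lmdb>=0.92', 'matplotlib', 'scikit-learn']
if sys.version_info.major < 3:
    extra_req.append('tornado')

# What the environment marker expresses declaratively; pip evaluates the
# marker against the running interpreter at install time.
marker_applies = sys.version_info.major < 3

# Both paths agree on whether tornado is pulled in.
assert ('tornado' in extra_req) == marker_applies
```

The declarative form is preferable because the condition survives into the built wheel's metadata, instead of being baked in by whichever interpreter ran `setup.py`.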