'add downloading of scannet' #206

Merged: 33 commits, Mar 30, 2020

Commits:
- 55413a8 Update README.md (tchaton, Mar 26, 2020)
- f6195dc Update README.md (tchaton, Mar 26, 2020)
- 6fc0cf3 Update README.md (tchaton, Mar 26, 2020)
- af5fa86 Update README.md (tchaton, Mar 26, 2020)
- 8735b9a Update README.md (tchaton, Mar 26, 2020)
- efe78c7 Update README.md (tchaton, Mar 26, 2020)
- 2b1365f Update README.md (tchaton, Mar 26, 2020)
- 4aa195f 'add downloading of scannet' (tchaton, Mar 27, 2020)
- e5e77f5 Merge branch 'scannet' into tchaton-patch-1 (tchaton, Mar 27, 2020)
- be43dab Merge pull request #204 from nicolas-chaulet/tchaton-patch-1 (tchaton, Mar 27, 2020)
- 0a5f2b4 Update README.md (tchaton, Mar 27, 2020)
- ccc08ff Update README.md (tchaton, Mar 27, 2020)
- 28c6042 Update README.md (tchaton, Mar 27, 2020)
- ccdf952 add failed loading (tchaton, Mar 27, 2020)
- 828e8ad 'Loading seems to work fine' (tchaton, Mar 28, 2020)
- 988503f 'processing scannet' (tchaton, Mar 28, 2020)
- 92db7d3 'update' (tchaton, Mar 28, 2020)
- 9ad96d5 'add delete features if used' (tchaton, Mar 28, 2020)
- 36c2ae6 add multiprocessing to accelerate computation (tchaton, Mar 28, 2020)
- 4487eaa correct tests (tchaton, Mar 28, 2020)
- d24bd99 add quantizing_func to SparseInput (tchaton, Mar 28, 2020)
- 32a5269 add print (tchaton, Mar 28, 2020)
- 5c89e39 remove need from test dataset (tchaton, Mar 28, 2020)
- 70b76f5 update tracker and download directly within scannet (tchaton, Mar 29, 2020)
- e0c0661 remove debug (tchaton, Mar 29, 2020)
- c271c45 add res16unet (tchaton, Mar 29, 2020)
- 8470cd8 update training config (tchaton, Mar 29, 2020)
- cb9af2c Fix tests and flake8 (nicolas-chaulet, Mar 30, 2020)
- a3e15ff Clean up scannet (nicolas-chaulet, Mar 30, 2020)
- fd71e41 Fix flake8 (nicolas-chaulet, Mar 30, 2020)
- 91854c7 Fix tests (nicolas-chaulet, Mar 30, 2020)
- 47efcda Removed scannet tracker (nicolas-chaulet, Mar 30, 2020)
- 500b49f remap labels (nicolas-chaulet, Mar 30, 2020)
`.devcontainer/Dockerfile` (8 additions & 2 deletions)

```diff
@@ -35,5 +35,11 @@ RUN pip3 install setuptools poetry
 COPY pyproject.toml .
 COPY poetry.lock .
 
-RUN POETRY_VIRTUALENVS_CREATE=false poetry install
-RUN pre-commit install
+RUN poetry install --no-root
+RUN poetry env info --path > /python_path
+RUN cat /python_path
+
+RUN git clone https://github.com/StanfordVL/MinkowskiEngine.git /tmp/ME \
+    && cd /tmp/ME \
+    && export VENV=`cat /python_path` \
+    && $VENV/bin/python setup.py install --cpu_only
```
`.gitignore` (1 addition)

```diff
@@ -19,3 +19,4 @@ measurements/*.pickle
 _build
 docs_old
 /test/kernels/dispositions
+*.egg-info*
```
`README.md` (63 additions)

@@ -8,6 +8,69 @@ This is a framework for running common deep learning models for point cloud analysis

The framework allows lean yet complex models to be built with minimum effort and great reproducibility.

## Project structure

```bash
├─ benchmark # Output from various benchmark runs
├─ conf # All configurations for training and evaluation live there
├─ dashboard # A collection of notebooks that allow result exploration and network debugging
├─ docker # Docker image that can be used for inference or training
├─ docs # All the documentation
├─ eval.py # Eval script
├─ find_neighbour_dist.py # Script to find optimal #neighbours within neighbour search operations
├─ forward_scripts # Script that runs a forward pass on possibly non annotated data
├─ outputs # All outputs from your runs sorted by date
├─ scripts # Some scripts to help manage the project
├─ src
├─ core # Core components
├─ datasets # All code related to datasets
├─ metrics # All metrics and trackers
├─ models # All models
├─ modules # Basic modules that can be used in a modular way
├─ utils # Various utils
└─ visualization # Visualization
├─ test
└─ train.py # Main script to launch a training
```

As a general philosophy, we have split datasets and models by task. For example, the datasets folder has three subfolders:

- segmentation
- classification
- registration

where each folder contains the dataset related to each task.

## Methods currently implemented

* **[PointNet](https://github.com/nicolas-chaulet/deeppointcloud-benchmarks/blob/master/src/modules/PointNet/modules.py#L54)** from Charles R. Qi *et al.*: [PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation](https://arxiv.org/abs/1612.00593) (CVPR 2017)
* **[PointNet++](https://github.com/nicolas-chaulet/deeppointcloud-benchmarks/tree/master/src/modules/pointnet2)** from Charles R. Qi *et al.*: [PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space](https://arxiv.org/abs/1706.02413) (NIPS 2017)
* **[RSConv](https://github.com/nicolas-chaulet/deeppointcloud-benchmarks/tree/master/src/modules/RSConv)** from Yongcheng Liu *et al.*: [Relation-Shape Convolutional Neural Network for Point Cloud Analysis](https://arxiv.org/abs/1904.07601) (CVPR 2019)
* **[RandLA-Net](https://github.com/nicolas-chaulet/deeppointcloud-benchmarks/tree/master/src/modules/RandLANet)** from Qingyong Hu *et al.*: [RandLA-Net: Efficient Semantic Segmentation of Large-Scale Point Clouds](https://arxiv.org/abs/1911.11236)
* **[PointCNN](https://github.com/nicolas-chaulet/deeppointcloud-benchmarks/tree/master/src/modules/PointCNN)** from Yangyan Li *et al.*: [PointCNN: Convolution On X-Transformed Points](https://arxiv.org/abs/1801.07791) (NIPS 2018)
* **[KPConv](https://github.com/nicolas-chaulet/deeppointcloud-benchmarks/tree/master/src/modules/KPConv)** from Hugues Thomas *et al.*: [KPConv: Flexible and Deformable Convolution for Point Clouds](https://arxiv.org/abs/1904.08889) (ICCV 2019)
* **[MinkowskiEngine](https://github.com/nicolas-chaulet/deeppointcloud-benchmarks/tree/master/src/modules/MinkowskiEngine)** from Christopher Choy *et al.*: [4D Spatio-Temporal ConvNets: Minkowski Convolutional Neural Networks](https://arxiv.org/abs/1904.08755) (CVPR'19)


## Available datasets
### Segmentation
* **[Scannet](https://github.com/ScanNet/ScanNet)** from Angela Dai *et al.*: [ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes](https://arxiv.org/abs/1702.04405)

* **[S3DIS](http://buildingparser.stanford.edu/dataset.html)** from Iro Armeni *et al.*: [Joint 2D-3D-Semantic Data for Indoor Scene Understanding](https://arxiv.org/abs/1702.01105)
  - S3DIS 1x1
  - S3DIS Room
  - S3DIS Fused

* **[Shapenet](https://www.shapenet.org/)** from Angel X. Chang *et al.*: [ShapeNet: An Information-Rich 3D Model Repository](https://arxiv.org/abs/1512.03012)

### Registration
* **[3DMatch](http://3dmatch.cs.princeton.edu)** from Andy Zeng *et al.*: [3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions](https://arxiv.org/abs/1603.08182)

### Classification
* **[ModelNet](https://modelnet.cs.princeton.edu)** from Zhirong Wu *et al.*: [3D ShapeNets: A Deep Representation for Volumetric Shapes](https://people.csail.mit.edu/khosla/papers/cvpr2015_wu.pdf)

## Getting started
### Requirements:
* CUDA > 10
`conf/data/segmentation/scannet.yaml` (39 additions, new file)

@@ -0,0 +1,39 @@
```yaml
data:
  class: scannet.ScannetDataset
  task: segmentation
  dataroot: data
  grid_size: 0.05
  version: 'v2'
  use_instance_labels: False
  use_instance_bboxes: False
  donotcare_class_ids: []
  max_num_point: None
  process_workers: 1
  train_transform:
    - transform: ToSparseInput
      params:
        grid_size: ${data.grid_size}
        mode: "mean"
        quantizing_func: "round"
    - transform: AddOnes
    - transform: AddFeatsByKeys
      params:
        list_add_to_x: [True, True]
        feat_names: ["ones", "rgb"]
        input_nc_feats: [1, 3]
        stricts: [True, True]
        delete_feats: [True, True]
  val_transform:
    - transform: ToSparseInput
      params:
        grid_size: ${data.grid_size}
        mode: "mean"
        quantizing_func: "round"
    - transform: AddOnes
    - transform: AddFeatsByKeys
      params:
        list_add_to_x: [True, True]
        feat_names: ["ones", "rgb"]
        input_nc_feats: [1, 3]
        stricts: [True, True]
        delete_feats: [True, True]
```
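The `ToSparseInput` transform configured above voxelizes each cloud at `grid_size` resolution, quantizing coordinates with a rounding function and pooling the features of points that fall in the same voxel ("mean" mode). A minimal sketch of that step, assuming NumPy; the function name and exact pooling semantics are illustrative, not the project's API:

```python
import numpy as np

def to_sparse_input(pos, feats, grid_size=0.05, quantizing_func=np.round):
    # Map continuous coordinates to integer voxel indices.
    coords = quantizing_func(pos / grid_size).astype(np.int64)
    # Find unique voxels and an inverse map from points to voxels.
    unique_coords, inverse = np.unique(coords, axis=0, return_inverse=True)
    # Average the features of all points sharing a voxel ("mean" mode).
    pooled = np.zeros((len(unique_coords), feats.shape[1]))
    np.add.at(pooled, inverse, feats)
    pooled /= np.bincount(inverse, minlength=len(unique_coords))[:, None]
    return unique_coords, pooled

pos = np.array([[0.01, 0.0, 0.0], [0.02, 0.0, 0.0], [0.30, 0.0, 0.0]])
feats = np.array([[1.0], [3.0], [5.0]])
coords, pooled = to_sparse_input(pos, feats)
# The first two points collapse into one voxel whose feature is their mean.
```

Swapping `np.round` for `np.floor` would mimic a "floor" `quantizing_func`, which is presumably why the commit "add quantizing_func to SparseInput" makes it configurable.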
`conf/models/segmentation/minkowski_baseline.yaml` (97 additions & 1 deletion)

@@ -58,4 +58,100 @@ models:
```yaml
    class: minkowski.Minkowski_Baseline_Model
    conv_type: "SPARSE"
    model_name: "MinkUNet34C"
    D: 3

  Res16UNet14:
    class: minkowski.Minkowski_Baseline_Model
    conv_type: "SPARSE"
    model_name: "Res16UNet14"
    D: 3

  Res16UNet18:
    class: minkowski.Minkowski_Baseline_Model
    conv_type: "SPARSE"
    model_name: "Res16UNet18"
    D: 3

  Res16UNet34:
    class: minkowski.Minkowski_Baseline_Model
    conv_type: "SPARSE"
    model_name: "Res16UNet34"
    D: 3

  Res16UNet14A:
    class: minkowski.Minkowski_Baseline_Model
    conv_type: "SPARSE"
    model_name: "Res16UNet14A"
    D: 3

  Res16UNet14A2:
    class: minkowski.Minkowski_Baseline_Model
    conv_type: "SPARSE"
    model_name: "Res16UNet14A2"
    D: 3

  Res16UNet14B:
    class: minkowski.Minkowski_Baseline_Model
    conv_type: "SPARSE"
    model_name: "Res16UNet14B"
    D: 3

  Res16UNet14B2:
    class: minkowski.Minkowski_Baseline_Model
    conv_type: "SPARSE"
    model_name: "Res16UNet14B2"
    D: 3

  Res16UNet14B3:
    class: minkowski.Minkowski_Baseline_Model
    conv_type: "SPARSE"
    model_name: "Res16UNet14B3"
    D: 3

  Res16UNet14C:
    class: minkowski.Minkowski_Baseline_Model
    conv_type: "SPARSE"
    model_name: "Res16UNet14C"
    D: 3

  Res16UNet14D:
    class: minkowski.Minkowski_Baseline_Model
    conv_type: "SPARSE"
    model_name: "Res16UNet14D"
    D: 3

  Res16UNet18A:
    class: minkowski.Minkowski_Baseline_Model
    conv_type: "SPARSE"
    model_name: "Res16UNet18A"
    D: 3

  Res16UNet18B:
    class: minkowski.Minkowski_Baseline_Model
    conv_type: "SPARSE"
    model_name: "Res16UNet18B"
    D: 3

  Res16UNet18D:
    class: minkowski.Minkowski_Baseline_Model
    conv_type: "SPARSE"
    model_name: "Res16UNet18D"
    D: 3

  Res16UNet34A:
    class: minkowski.Minkowski_Baseline_Model
    conv_type: "SPARSE"
    model_name: "Res16UNet34A"
    D: 3

  Res16UNet34B:
    class: minkowski.Minkowski_Baseline_Model
    conv_type: "SPARSE"
    model_name: "Res16UNet34B"
    D: 3

  Res16UNet34C:
    class: minkowski.Minkowski_Baseline_Model
    conv_type: "SPARSE"
    model_name: "Res16UNet34C"
    D: 3
```
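Every entry above differs only in `model_name`, which suggests `Minkowski_Baseline_Model` resolves the network class by name at runtime. A hedged sketch of such a name-based factory; the registry object and constructor signature here are invented for illustration, not taken from the repository:

```python
from types import SimpleNamespace

def build_model(model_name, module, *args, **kwargs):
    # Resolve the class named in the config on the given module/registry
    # and instantiate it with the remaining config parameters.
    cls = getattr(module, model_name)
    return cls(*args, **kwargs)

# Toy stand-in for the real module hosting the Res16UNet variants.
class Res16UNet14:
    def __init__(self, in_channels, out_channels, D=3):
        self.in_channels, self.out_channels, self.D = in_channels, out_channels, D

registry = SimpleNamespace(Res16UNet14=Res16UNet14)
model = build_model("Res16UNet14", registry, 4, 20, D=3)
```

This design lets a new architecture be exposed by adding a five-line YAML entry, which is consistent with the "add res16unet" commit touching mostly configuration.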
`conf/training/minkowski_scannet.yaml` (44 additions, new file)

@@ -0,0 +1,44 @@
```yaml
# Ref: https://github.com/chrischoy/SpatioTemporalSegmentation/blob/master/config.py
training:
  epochs: 100
  num_workers: 4
  batch_size: 16
  shuffle: True
  cuda: 1
  precompute_multi_scale: False # Compute multi-scale features on CPU for faster training / inference
  optim:
    base_lr: 0.01
    # accumulated_gradient: -1 # Accumulate gradient accumulated_gradient * batch_size
    grad_clip: -1
    optimizer:
      class: SGD
      params:
        lr: ${training.optim.base_lr} # The path is cut from training
        momentum: 0.9
        dampening: 0.1
        weight_decay: 1e-4
    lr_scheduler: ${lr_scheduler}
    bn_scheduler:
      bn_policy: "step_decay"
      params:
        bn_momentum: 0.02
        bn_decay: 1
        decay_step: 10
        bn_clip: 1e-2
  weight_name: "latest" # Used when resuming; selects which model to load from [miou, macc, acc..., latest]
  enable_cudnn: True
  checkpoint_dir: ""

# These arguments define which model, dataset and task are created for benchmarking
# Parameters for Weights and Biases
wandb:
  entity: ""
  project: scannet
  log: False
  notes:
  name:
  public: True # If True, the model is displayed in the wandb logs

# Parameters for TensorBoard visualization
tensorboard:
  log: True
```
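One plausible reading of the `bn_scheduler` "step_decay" policy, not taken from the repository's code: batch-norm momentum starts at `bn_momentum`, is multiplied by `bn_decay` every `decay_step` epochs, and is clipped from below at `bn_clip`. A sketch under that assumption:

```python
def bn_momentum_at(epoch, bn_momentum=0.02, bn_decay=1.0, decay_step=10, bn_clip=1e-2):
    # Apply one multiplicative decay per completed decay_step epochs,
    # never letting the momentum drop below bn_clip.
    momentum = bn_momentum * (bn_decay ** (epoch // decay_step))
    return max(momentum, bn_clip)
```

With `bn_decay: 1` as configured above the schedule is effectively constant at 0.02; setting `bn_decay` below 1 would shrink the momentum over training until it hits the `bn_clip` floor.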
`docker/Dockerfile` (5 additions & 1 deletion)

```diff
@@ -18,7 +18,11 @@ RUN pip3 install setuptools poetry
 COPY pyproject.toml .
 COPY poetry.lock .
 
-RUN POETRY_VIRTUALENVS_CREATE=false poetry install -E MinkowskiEngine --no-dev --no-root && rm -rf /root/.cache
+RUN POETRY_VIRTUALENVS_CREATE=false poetry install --no-dev --no-root && rm -rf /root/.cache
+
+RUN git clone https://github.com/StanfordVL/MinkowskiEngine.git /tmp/ME \
+    && cd /tmp/ME \
+    && python3 setup.py install --cpu_only
 
 ARG MODEL=""
 ENV WORKDIR=/dpb
```
`poetry.lock` (39 additions & 1 deletion): generated file, diff not rendered.
`pyproject.toml` (1 addition & 3 deletions)

```diff
@@ -3,9 +3,6 @@ name = "deeppointcloud_benchmark"
 version = "0.1.0"
 description = ""
 authors = ["Nicolas <nicolas.chaulet@gmail.com>"]
-packages = [
-    { include = "src" }
-]
 
 [tool.poetry.dependencies]
 python = "^3.6"
@@ -40,6 +37,7 @@ param = "^1.9.3"
 codecov = "^2.0.16"
 sphinx = "^2.4.4"
 sphinx-autobuild = "^0.7.1"
+gpustat = "^0.6.0"
 
 [tool.poetry.extras]
 MinkowskiEngine = ["MinkowskiEngine"]
```