
Preprocessing on Scannet #8

Closed
xjdexjt opened this issue Oct 9, 2022 · 23 comments

Comments

@xjdexjt

xjdexjt commented Oct 9, 2022

[screenshot]
When I run the preprocessing code for ScanNet, the following result is obtained.
[screenshots]
Then the code finishes, but the data is not processed.

Besides, I checked the code, but I cannot find where any processing functions are called in "init".
[screenshot]
Could you give me any advice on how to process the data?

Thanks.

@xjdexjt
Author

xjdexjt commented Oct 9, 2022

OK, I solved it.
Thanks for your great work.

@xjdexjt xjdexjt closed this as completed Oct 9, 2022
@Mythili-kannan

I am also facing the same issue. Can you please tell me how you solved it?

@JonasSchult JonasSchult reopened this Oct 14, 2022
@JonasSchult
Owner

Hi!
Which problem do you face?

Best,
Jonas

@Mythili-kannan

[screenshot]
While running s3dis_preprocessing.py, I only got this screen. Can you please let me know how to proceed further?

Thanks in advance

@JonasSchult
Owner

Hi!
You are missing the command preprocess when calling s3dis_preprocessing.py:

datasets/preprocessing/s3dis_preprocessing.py preprocess \
--data_dir="PATH_TO_Stanford3dDataset_v1.2" \
--save_dir="../../data/processed/s3dis"

Let me know if this solves your problem :)

Best,
Jonas
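For anyone unsure why the script silently exits without that argument: the traceback later in this thread shows the repo uses python-fire, which maps the first positional CLI argument onto a method of the class. A minimal sketch of that dispatch pattern (class and method names here are illustrative, not the repo's actual code):

```python
class Preprocessing:
    """Stand-in for the real preprocessing class; names are illustrative."""

    def __init__(self, data_dir=".", save_dir="."):
        self.data_dir = data_dir
        self.save_dir = save_dir

    def preprocess(self):
        return f"preprocessing {self.data_dir} -> {self.save_dir}"


def dispatch(obj, argv):
    # Fire-style dispatch: the first positional CLI argument selects
    # the method to call. Without it, no method runs and the script
    # appears to do nothing and exit.
    if not argv or not hasattr(obj, argv[0]):
        raise SystemExit("missing command (e.g. 'preprocess')")
    return getattr(obj, argv[0])()
```

So `s3dis_preprocessing.py preprocess --data_dir=... --save_dir=...` calls `preprocess()` on the constructed object, while omitting `preprocess` triggers no work at all.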

@Mythili-kannan

Mythili-kannan commented Oct 14, 2022

Got it, thank you so much, I will try that. One more question: is there an inference script that takes a single .ply and outputs instances? That would be very helpful.

@Mythili-kannan

Mythili-kannan commented Oct 14, 2022

I'm getting this error when I run the s3dis preprocessing script:
FILE SIZE DOES NOT MATCH FOR /home/sai/mythili_wd/Segmentation/Mask3D/Stanford3dDataset_v1.2_Aligned_Version/Area_5/lobby_1/lobby_1.txt
(1162766 vs. 1223236)

@JonasSchult
Owner

> got it thank you so much will try them, one more doubt is there is any inference script which will take single .ply and gives out instances , it will be very helpful

totally agree! #11 deals with this, already. Will take some time for this :)

> getting this error when i run script of preprocessing of s3dis FILE SIZE DOES NOT MATCH FOR /home/sai/mythili_wd/Segmentation/Mask3D/Stanford3dDataset_v1.2_Aligned_Version/Area_5/lobby_1/lobby_1.txt (1162766 vs. 1223236)

This is only a debug message; you can ignore it.

@Mythili-kannan

Thanks for the reply, but at the end it shows an error for Area_5 of S3DIS:

ValueError: the number of columns changed from 6 to 5 at row 180389; use usecols to select a subset and avoid this error

@JonasSchult
Owner

Interesting! I will take a look into this tomorrow. Thanks for letting me know!

@Mythili-kannan

Thank you, waiting for the reply!

@JonasSchult
Owner

I figured out what went wrong. Thank you very much for bringing it to my attention! :)

There is an encoding error in Area_5/hallway_6/Annotations/ceiling_1.txt at line 180389.
[screenshot of the encoding error]
I just deleted the bad character. That solved the issue.

Moreover, you need to rename Area_6/copyRoom_1/copy_Room_1.txt to Area_6/copyRoom_1/copyRoom_1.txt as it breaks the naming convention.

I fixed these issues early in the project and forgot to describe them in the README. I will adapt the s3dis preprocessing script so that it won't cause trouble in the future.

Thanks again! :)

@Mythili-kannan

Mythili-kannan commented Oct 15, 2022

Thanks for the reply, that error got fixed. Awaiting the inference code :)

@JonasSchult
Owner

Here is the test script for S3DIS.
I will make it a bit clearer in the readme ;)

@Mythili-kannan

Thank you so much for the guidance :)

@Mythili-kannan

Mythili-kannan commented Oct 17, 2022

Hi, can you please help me here: after the output is generated in the eval_output folder, how can I apply the instances to the individual point cloud?

@xjdexjt
Author

xjdexjt commented Oct 17, 2022

Which S3DIS version do you use?
Stanford3dDataset_v1.2_Aligned_Version or Stanford3dDataset_v1.2?
Thanks.
@JonasSchult

@Mythili-kannan

Yes, that's correct.

@xjdexjt
Author

xjdexjt commented Oct 17, 2022

[screenshot]
When I run the training code using the S3DIS data generated from "Stanford3dDataset_v1.2_Aligned_Version", I get this error, which means that the value of "downsampling" is not defined in the configuration.
When I remove "downsampling" from s3dis.yaml, the training code runs successfully.
[screenshot]

Could you tell me if this operation is right?
Thanks.

@JonasSchult
Owner

Hi,

please pull the latest version of the codebase. The downsampling parameter is deprecated.

We use Stanford3dDataset_v1.2, not the aligned version. :)

> Hi can you please help me here,after i get generated output in eval_output folder, how can i apply the instance on the individual pointcloud

The evaluation is started automatically and outputs the results. Or did you mean how to get the visualizations?
Try the parameter general.save_visualizations=true.

Best
Jonas

@JonasSchult
Owner

I will close the issue for now. If some problems still exist, please re-open it! :)

Best,
Jonas

@MarcoAyman

I downloaded the dataset (Stanford3dDataset_v1.2), but when I try to use the script I get a strange error and it does not run:

C:\Users\mhanna\Desktop\S3DIS_preprocessing>python s3dis_preprocessing.py preprocess --data_dir=C:\Users\mhanna\Desktop\S3DIS_preprocessing\data\raw\Stanford3dDataset_v1.2 --save_dir=C:\Users\mhanna\Desktop\S3DIS_preprocessing\data\processed\s3dis --n_jobs=-1
2023-12-13 15:32:51.176 | INFO | base_preprocessing:preprocess:47 - Tasks for Area_1: 44
[Parallel(n_jobs=12)]: Using backend LokyBackend with 12 concurrent workers.
2023-12-13 15:32:53.891 | ERROR | fire.core:_CallAndUpdateTrace:691 - An error has been caught in function '_CallAndUpdateTrace', process 'MainProcess' (13104), thread 'MainThread' (27656):
joblib.externals.loky.process_executor._RemoteTraceback:
"""
Traceback (most recent call last):
File "C:\python\lib\site-packages\joblib\externals\loky\process_executor.py", line 463, in _process_worker
r = call_item()
File "C:\python\lib\site-packages\joblib\externals\loky\process_executor.py", line 291, in __call__
return self.fn(*self.args, **self.kwargs)
File "C:\python\lib\site-packages\joblib\parallel.py", line 589, in __call__
return [func(*args, **kwargs)
File "C:\python\lib\site-packages\joblib\parallel.py", line 589, in <listcomp>
return [func(*args, **kwargs)
File "C:\Users\mhanna\Desktop\S3DIS_preprocessing\s3dis_preprocessing.py", line 151, in process_file
pcd_size = self._buf_count_newlines_gen(os.path.join(filepath, scene_name, "Annotations", f"{scene_name}.txt"))
File "C:\Users\mhanna\Desktop\S3DIS_preprocessing\s3dis_preprocessing.py", line 92, in _buf_count_newlines_gen
with open(fname, "rb") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\mhanna\Desktop\S3DIS_preprocessing\data\raw\Stanford3dDataset_v1.2\Area_1\hallway_4.txt'
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):

File "C:\Users\mhanna\Desktop\S3DIS_preprocessing\s3dis_preprocessing.py", line 283, in <module>
Fire(S3DISPreprocessing)
│ └ <class '__main__.S3DISPreprocessing'>
└ <function Fire at 0x000001DD1DC770A0>

File "C:\python\lib\site-packages\fire\core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
│ │ │ │ │ └ 's3dis_preprocessing.py'
│ │ │ │ └ {}
│ │ │ └ Namespace(verbose=False, interactive=False, separator='-', completion=None, help=False, trace=False)
│ │ └ ['preprocess', '--data_dir=C:\Users\mhanna\Desktop\S3DIS_preprocessing\data\raw\Stanford3dDataset_v1.2', '--save_dir=C...
│ └ <class '__main__.S3DISPreprocessing'>
└ <function _Fire at 0x000001DD1E01DC60>

File "C:\python\lib\site-packages\fire\core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
│ └ <function _CallAndUpdateTrace at 0x000001DD1E01DD80>
└ <bound method BasePreprocessing.preprocess of <__main__.S3DISPreprocessing object at 0x000001DD1EC4BFA0>>

File "C:\python\lib\site-packages\fire\core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
│ │ └ {}
│ └ []
└ <bound method BasePreprocessing.preprocess of <__main__.S3DISPreprocessing object at 0x000001DD1EC4BFA0>>

File "C:\Users\mhanna\Desktop\S3DIS_preprocessing\base_preprocessing.py", line 48, in preprocess
parallel_results = Parallel(n_jobs=self.n_jobs, verbose=10)(
│ │ └ 12
│ └ <__main__.S3DISPreprocessing object at 0x000001DD1EC4BFA0>
└ <class 'joblib.parallel.Parallel'>

File "C:\python\lib\site-packages\joblib\parallel.py", line 1952, in __call__
return output if self.return_generator else list(output)
│ │ │ └ <generator object Parallel._get_outputs at 0x000001DD1ECF9D20>
│ │ └ False
│ └ Parallel(n_jobs=12)
└ <generator object Parallel._get_outputs at 0x000001DD1ECF9D20>

File "C:\python\lib\site-packages\joblib\parallel.py", line 1595, in _get_outputs
yield from self._retrieve()
│ └ <function Parallel._retrieve at 0x000001DD1EC91630>
└ Parallel(n_jobs=12)

File "C:\python\lib\site-packages\joblib\parallel.py", line 1699, in _retrieve
self._raise_error_fast()
│ └ <function Parallel._raise_error_fast at 0x000001DD1EC916C0>
└ Parallel(n_jobs=12)

File "C:\python\lib\site-packages\joblib\parallel.py", line 1734, in _raise_error_fast
error_job.get_result(self.timeout)
│ │ │ └ None
│ │ └ Parallel(n_jobs=12)
│ └ <function BatchCompletionCallBack.get_result at 0x000001DD1EC90790>
└ <joblib.parallel.BatchCompletionCallBack object at 0x000001DD1ED0F5E0>

File "C:\python\lib\site-packages\joblib\parallel.py", line 736, in get_result
return self._return_or_raise()
│ └ <function BatchCompletionCallBack._return_or_raise at 0x000001DD1EC90820>
└ <joblib.parallel.BatchCompletionCallBack object at 0x000001DD1ED0F5E0>

File "C:\python\lib\site-packages\joblib\parallel.py", line 754, in _return_or_raise
raise self._result
└ <joblib.parallel.BatchCompletionCallBack object at 0x000001DD1ED0F5E0>

FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\mhanna\Desktop\S3DIS_preprocessing\data\raw\Stanford3dDataset_v1.2\Area_1\hallway_4.txt'

@lionlai1989

There are two things:

  • The first is that the error described here only appears in the "Aligned_Version".
  • After following the steps to fix the encoding error and the file-naming error, I still got the following error:
2024-05-22 03:40:22.826 | INFO     | datasets.preprocessing.base_preprocessing:preprocess:47 - Tasks for Area_5: 68
[Parallel(n_jobs=8)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=8)]: Done   2 tasks      | elapsed:    2.8s
[Parallel(n_jobs=8)]: Done   9 tasks      | elapsed:    9.2s
FILE SIZE DOES NOT MATCH FOR input_data/s3dis/Stanford3dDataset_v1.2_Aligned_Version/Area_5/lobby_1/lobby_1.txt
(1162766 vs. 1223236)

Please help me fix it.
