
Classification on large *.ply data #138

Closed
tsly123 opened this issue Dec 28, 2018 · 7 comments

@tsly123

tsly123 commented Dec 28, 2018

Hi,
Thank you for sharing your work. My 3D face dataset contains large ply files. For example:

ply
format ascii 1.0
element vertex 166368
property float x
property float y
property float z
element face 156797
property list uchar int vertex_indices
end_header
-50.8063 31.0753 83.1526
...
3 37583 37611 37610 
...

I need to do classification based on these ply files, but I have no idea how to implement it using PointCNN. What should I do?

Thank you.
tsly

@TSchattschneider
Contributor

TSchattschneider commented Jan 4, 2019

You could implement a function similar to load_cls() in data_utils.py, using the plyfile Python library instead of h5py to load the point data from your PLY files. For example, you would similarly put all the points into a list (called points in the referenced function) and then use NumPy's concatenation function to create a single array.
You could then return that array along with a corresponding label array, and use this function to load your data in customized versions of the training/testing scripts for classification.
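As a minimal sketch of that idea, here is an ASCII-PLY vertex loader in plain Python/NumPy. The plyfile library is the more robust choice in practice; this sketch simply assumes the header layout shown in the issue ('element vertex N' followed by 'end_header', with vertex rows first in the body):

```python
import numpy as np

def load_ply_vertices(lines):
    """Read vertex coordinates from the lines of an ASCII PLY file.

    Minimal sketch: assumes an 'element vertex N' declaration and an
    'end_header' line, as in the header shown in the issue.
    """
    n_vertices = 0
    body = []
    for i, line in enumerate(lines):
        if line.startswith("element vertex"):
            n_vertices = int(line.split()[-1])
        if line.strip() == "end_header":
            body = lines[i + 1:i + 1 + n_vertices]  # vertex rows come first
            break
    pts = [[float(v) for v in row.split()[:3]] for row in body]
    return np.asarray(pts, dtype=np.float32)  # shape (n_vertices, 3)
```

Each file's array could then be appended to a points list and merged with np.concatenate (or np.stack, if every cloud is resampled to the same point count), mirroring what load_cls() does with the h5 data.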

@tsly123
Author

tsly123 commented Jan 10, 2019

Thank you @TSchattschneider ,
So far, I've run the mnist and cifar examples, and now I am struggling to extract features from the FC and xconv layers. Do you know how to extract features from a pretrained PointCNN network?
Thank you.
tsly

@burui11087
Collaborator

@tsly123

You can get layer features with the following approach:

logits = net.logits

@tsly123
Author

tsly123 commented Jan 10, 2019

Thank you @burui11087 for your prompt reply, but I am facing another problem.
As you can see, my ply files are quite large (> 150k points). I want to down-sample them using the FPS algorithm (tf_sampling in the sampling folder), but I get core dumps or other errors. I looked through the other issues and saw you mention switching to native Python, among other suggestions. I've tried them all, even building in a fresh docker container, and still hit the problems.
Could you provide a docker image or instructions for compiling tf_sampling?
Thank you very much
tsly

@burui11087
Collaborator

@tsly123
Hi, please use the command docker pull tensorflow/tensorflow:1.6.0-devel-gpu-py3 and install the nvidia-docker environment.
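For reference, the setup could look like the following. The mount path is illustrative; the GPU runtime flag assumes nvidia-docker2, which was current when this thread was written:

```shell
# Pull the TF 1.6 GPU development image suggested above
docker pull tensorflow/tensorflow:1.6.0-devel-gpu-py3

# Start a container with GPU access (requires nvidia-docker2),
# mounting the PointCNN checkout into /workspace
docker run --runtime=nvidia -it \
    -v "$(pwd)":/workspace \
    tensorflow/tensorflow:1.6.0-devel-gpu-py3 bash
```

Inside the container, tf_sampling can then be compiled against the exact TensorFlow version the image ships with, which avoids the ABI mismatches that often cause the core dumps mentioned above.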

@tsly123
Author

tsly123 commented Jan 22, 2019

Hi,
Sorry to bother you again. I am struggling to extract features from the xconv layer before fc0 (and hopefully the fc_mean layer), in the order of train_paths, without shuffling the dataset. I've tried to code this and hit many errors. Below is one of my attempts.

import numpy as np
import tensorflow as tf

with tf.Graph().as_default():
    with tf.Session() as sess:
        # restore the pretrained PointCNN graph
        new_saver = tf.train.import_meta_graph(meta_path)
        new_saver.restore(sess, ckpt_path)
        # output of the last xconv layer, feeding fc0
        input_fc0 = tf.get_default_graph().get_tensor_by_name('xconv_6_fts_list_concat:0')
        input_fc0 = tf.nn.l2_normalize(input_fc0, 2)
        input_fc0_size = input_fc0.get_shape()[-1]
        images_placeholder = tf.get_default_graph().get_tensor_by_name("data_train:0")
        phase_train_placeholder = tf.get_default_graph().get_tensor_by_name("is_training:0")
        handle = tf.get_default_graph().get_tensor_by_name("handle:0")

        emb_array = np.zeros((len(train_paths), 128, input_fc0_size))
        array = np.zeros((26929, 4096, 6))  # 26929 = number of samples
        for idx, arr_path in enumerate(train_paths):
            array_file = np.loadtxt(arr_path, delimiter=' ')
            array = np.zeros((array_file.shape[0], 6))
            array[:, :3] = array_file
            array[:, -1] = 1

        dataset = tf.data.Dataset.from_tensor_slices((array))
        iterator = dataset.make_one_shot_iterator()
        handle_feed = sess.run(iterator.string_handle())

        for i in range(0, len(train_paths)):
            inputs = iterator.get_next()
            feed_dict = {images_placeholder: inputs, handle: handle_feed,
                         phase_train_placeholder: False}
            arr = sess.run(input_fc0, feed_dict=feed_dict)

            emb_array[i, :] = arr

        emb_array2 = np.mean(emb_array, axis=1).reshape(len(train_paths), input_fc0_size)

This raises an error saying it couldn't create a tensor over 2.0 GB. I've also tried breaking the data into batches, but it keeps raising different errors, such as not being able to feed a sample of shape (4096, 6) to 'data_train:0', which has shape (26926, 4096, 6).
Could you give me an example of extracting features from an xconv layer? I'm new to the TensorFlow framework.

Thank you.
tsly

burui11087 reopened this Jan 22, 2019
@burui11087
Collaborator

hi @tsly123

you can return the fts_X variable in pointcnn.py, for example:

fts_X = nn_fts_input
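On the 2 GB error above: tf.data.Dataset.from_tensor_slices embeds the whole array in the graph as a constant, which hits TensorFlow's 2 GB GraphDef limit for a dataset this size. One way around it is to keep the data out of the graph and feed one small batch at a time through the placeholder. A NumPy-level sketch of that batching loop (run_model stands in for the sess.run call on the xconv tensor; all names and shapes here are illustrative, not PointCNN's actual API):

```python
import numpy as np

def extract_features(paths, run_model, load_fn=np.loadtxt,
                     batch_size=8, n_points=4096, feat_dim=128):
    """Batched feature extraction sketch (hypothetical names).

    run_model: maps a (B, n_points, 6) batch to (B, n_points, feat_dim)
               features -- standing in for sess.run(input_fc0, feed_dict=...).
    load_fn:   reads one point cloud file into an (n_points, 3) xyz array.
    """
    feats = np.zeros((len(paths), feat_dim), dtype=np.float32)
    for start in range(0, len(paths), batch_size):
        chunk = paths[start:start + batch_size]
        batch = np.zeros((len(chunk), n_points, 6), dtype=np.float32)
        for j, p in enumerate(chunk):
            cloud = load_fn(p)                  # xyz coordinates for one sample
            batch[j, :, :3] = cloud[:n_points]
            batch[j, :, -1] = 1.0               # constant extra feature, as in the thread
        out = run_model(batch)                  # (B, n_points, feat_dim)
        feats[start:start + len(chunk)] = out.mean(axis=1)  # average over points
    return feats
```

Because each run_model call only ever sees one batch, no multi-gigabyte constant is baked into the graph, and the file order of paths is preserved, so the features stay aligned with train_paths without shuffling.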
