No conversion possible and could not execute command error on default datasets for LSH #12

Closed
mindisk opened this issue Mar 26, 2016 · 6 comments

Comments


mindisk commented Mar 26, 2016

Hello,

I encountered an issue that I do not know how to solve, so I hope someone here can help me.
I am trying to run the default benchmark for the LSH method of the mlpack library, and I get a "No conversion possible" error followed by "Could not execute command". Running the default benchmarks on other methods gives similar results.

Am I missing something?

Full output of running the default benchmark for the LSH method of the mlpack library:

/usr/local/bin/python3 benchmark/run_benchmark.py -c config.yaml -b mlpack -l False -u False -m LSH --f "" --n False
[INFO ] CPU Model:  Intel(R) Core(TM) i5-5300U CPU @ 2.30GHz
[INFO ] Distribution: debian jessie/sid
[INFO ] Platform: x86_64
[INFO ] Memory: 7.50390625 GB
[INFO ] CPU Cores: 4
[INFO ] Method: LSH
[INFO ] Options: -k 3 -s 42
[INFO ] Library: mlpack
[INFO ] Dataset: wine
[FATAL] No conversion possible.
[FATAL] Could not execute command: ['mlpack_lsh', '-r', '-v', '-k', '3', '-s', '42']
[INFO ] Dataset: cloud
[FATAL] No conversion possible.
[FATAL] Could not execute command: ['mlpack_lsh', '-r', '-v', '-k', '3', '-s', '42']
[INFO ] Dataset: wine
[FATAL] No conversion possible.
[FATAL] Could not execute command: ['mlpack_lsh', '-r', '-v', '-k', '3', '-s', '42']
[INFO ] Dataset: isolet
[FATAL] No conversion possible.
[FATAL] Could not execute command: ['mlpack_lsh', '-r', '-v', '-k', '3', '-s', '42']
[INFO ] Dataset: corel-histogram
[FATAL] No conversion possible.
[FATAL] Could not execute command: ['mlpack_lsh', '-r', '-v', '-k', '3', '-s', '42']
[INFO ] Dataset: covtype
[FATAL] No conversion possible.
[FATAL] Could not execute command: ['mlpack_lsh', '-r', '-v', '-k', '3', '-s', '42']
[INFO ] Dataset: 1000000-10-randu
[FATAL] No conversion possible.
[FATAL] Could not execute command: ['mlpack_lsh', '-r', '-v', '-k', '3', '-s', '42']
[INFO ] Dataset: mnist
[FATAL] No conversion possible.
[FATAL] Could not execute command: ['mlpack_lsh', '-r', '-v', '-k', '3', '-s', '42']
[INFO ] Dataset: Twitter
[FATAL] No conversion possible.
[FATAL] Could not execute command: ['mlpack_lsh', '-r', '-v', '-k', '3', '-s', '42']
[INFO ] Dataset: tinyImages100k
[FATAL] No conversion possible.
[FATAL] Could not execute command: ['mlpack_lsh', '-r', '-v', '-k', '3', '-s', '42']

                    mlpack 
wine               failure 
cloud              failure 
isolet             failure 
corel-histogram    failure 
covtype            failure 
1000000-10-randu   failure 
mnist              failure 
Twitter            failure 
tinyImages100k     failure

zoq commented Mar 26, 2016

Can you check that the datasets folder isn't empty? (There's a quick check sketched below the list.) If that's the case, there are three options:

  1. You could download the datasets repository using: 'git submodule update --init'. Sometimes this doesn't work, because bitbucket doesn't work well with huge datasets.
  2. Download the dataset repository as a zip: https://bitbucket.org/zoqbits/benchmark-data/get/1f7770e39c61.zip
  3. Manually download the datasets you need: https://bitbucket.org/zoqbits/benchmark-data/src
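
If it helps, here is a minimal sketch (Python, not part of the benchmark itself) that lists whatever is under datasets/ so you can tell whether the checkout or download actually worked; the datasets/ path is the one from your error output:

import glob
import os

# Sketch only: show what is actually in the datasets folder.
files = sorted(glob.glob(os.path.join("datasets", "*")))
if not files:
    print("datasets/ is empty -- the data hasn't been fetched yet")
for f in files:
    print(f, os.path.getsize(f), "bytes")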


mindisk commented Mar 26, 2016

Oh, yes, it was empty. I didn't even realize it. I downloaded the datasets and ran the benchmark again, and the "No conversion possible" error is gone. However, I still get the following error on all datasets that are used for benchmarking LSH:
[FATAL] Could not execute command: ['mlpack_lsh', '-r', 'datasets/wine.csv', '-v', '-k', '3', '-s', '42']
Any ideas how to resolve it?


zoq commented Mar 26, 2016

Ah, I guess since mlpack changed the binary names from lsh to mlpack_lsh, the auto-detection doesn't work anymore. I'll fix this in the next few days. In the meantime, you can manually specify the mlpack binary path using the MLPACK_BIN parameter:

make MLPACK_BIN=/usr/local/bin/ run
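
Rough illustration (Python, not the benchmark's actual detection code) of why the rename breaks a PATH-based lookup:

import shutil

# The old binary name is no longer found on the PATH; the prefixed one is.
print(shutil.which("lsh"))         # None after the rename (unless an old lsh is still installed)
print(shutil.which("mlpack_lsh"))  # e.g. /usr/local/bin/mlpack_lsh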


mindisk commented Mar 28, 2016

After running the make MLPACK_BIN=/usr/local/bin/ run BLOCK=mlpack METHODBLOCK=LSH command, I get a similar error:

[FATAL] Could not execute command: ['/usr/local/bin/mlpack_lsh', '-r','datasets/wine.csv', '-v', '-k', '3', '-s', '42']

I've now run mlpack_lsh manually with the same parameters as the benchmark script: mlpack_lsh -r datasets/wine.csv -d distances.csv -n neighbours.csv -v -k 5 -s 42. It seems that I'm not providing the query file:

[DEBUG] Compiled with debugging symbols.
[FATAL] Both --query_file and --k must be specified if search is to be done!
terminate called after throwing an instance of 'std::runtime_error'
  what():  fatal error; see Log::Fatal output
Aborted (core dumped)

So, if I create a query file with at least one query point, it executes without trouble:
mlpack_lsh -r datasets/wine.csv -d distances.csv -n neighbours.csv -q datasets/wines_query_point.csv -v -k 5 -s 42

[DEBUG] Compiled with debugging symbols.
[INFO ] Using LSH with 10 projections (K) and 30 tables (L) with default hash width.
[INFO ] Loading 'datasets/wine.csv' as CSV data.  Size is 13 x 178.
[INFO ] Loaded reference data from 'datasets/wine.csv' (13 x 178).
[INFO ] Hash width chosen as: 19.4285
[INFO ] Final hash table size: (5051 x 4)
[INFO ] Computing 5 distance approximate nearest neighbors.
[INFO ] Loading 'datasets/wines_query_point.csv' as CSV data.  Size is 13 x 1.
[INFO ] Loaded query data from 'datasets/wines_query_point.csv' (13 x 1).
[INFO ] 1 distinct indices returned on average.
[INFO ] Neighbors computed.
[INFO ] Saving CSV data to 'distances.csv'.
[INFO ] Saving CSV data to 'neighbours.csv'.
[INFO ] 
[INFO ] Execution parameters:
[INFO ]   bucket_size: 500
[INFO ]   distances_file: distances.csv
[INFO ]   hash_width: 0
[INFO ]   help: false
[INFO ]   info: ""
[INFO ]   input_model_file: ""
[INFO ]   k: 5
[INFO ]   neighbors_file: neighbours.csv
[INFO ]   output_model_file: ""
[INFO ]   projections: 10
[INFO ]   query_file: datasets/wines_query_point.csv
[INFO ]   reference_file: datasets/wine.csv
[INFO ]   second_hash_size: 99901
[INFO ]   seed: 42
[INFO ]   tables: 30
[INFO ]   verbose: true
[INFO ]   version: false
[INFO ] 
[INFO ] Program timers:
[INFO ]   computing_neighbors: 0.000079s
[INFO ]   hash_building: 0.143197s
[INFO ]   loading_data: 0.001300s
[INFO ]   saving_data: 0.000163s
[INFO ]   total_time: 0.146325s

So the question is: does the benchmark script provide a query point file for mlpack_lsh? It seems it doesn't; could this be the issue?

mindisk changed the title from "No conversion possible error on all default datasets" to "No conversion possible and could not execute command error on default datasets for LSH" on Mar 28, 2016

zoq commented Apr 1, 2016

Sorry for the slow response. You are right: in the latest version you have to specify a query set, which shouldn't be the case:

"You may specify a separate set of reference points and query points, or just a reference set which will be used as both the reference and query set."

I'll fix that in mlpack in the next few days. What you can do in the meantime is modify the mlpack lsh.py script.

Instead of using:

cmd = shlex.split(self.path + "mlpack_lsh -r " + self.dataset + " -v " + options)

we could write:

cmd = shlex.split(self.path + "mlpack_lsh -r " + self.dataset + " -q " + self.dataset + " -v " + options)
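
For context, here is a rough, self-contained sketch (Python) of what that command construction boils down to; the path, dataset, and options values are examples taken from this thread, and the subprocess call just stands in for however the benchmark actually executes the command:

import shlex
import subprocess

# Sketch only: pass the reference set as the query set as well, so the
# --query_file requirement of the current mlpack_lsh is satisfied.
path = "/usr/local/bin/"        # e.g. the MLPACK_BIN prefix
dataset = "datasets/wine.csv"   # reference dataset from the benchmark run
options = "-k 3 -s 42"          # options reported in the log above

cmd = shlex.split(path + "mlpack_lsh -r " + dataset + " -q " + dataset + " -v " + options)
subprocess.check_call(cmd)      # raises CalledProcessError if mlpack_lsh fails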


zoq commented Mar 12, 2017

zoq closed this as completed Mar 12, 2017