Sometimes I have to read ARFF files and pre-process them. In those cases, the measures calculated by mldr are not useful and also take a lot of time to compute. I tried the read.arff function from RWeka, but it does not seem to discriminate between features and labels the way mldr does. It would therefore be nice if mldr allowed me to simply read a dataset, without calculating the measures, and just provide a data.frame along with the indices of the features and labels. After pre-processing the data.frame, I would then build an mldr object with mldr_from_dataframe. Thanks for mldr!
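The workflow being requested could look roughly like this. This is a sketch under assumptions: the hypothetical reader function name and the fields of its return value (dataset, labelIndices) are illustrative, not part of mldr's API at the time of this request; only mldr_from_dataframe is an existing mldr function.

```r
library(mldr)

# Hypothetical: read an ARFF file without computing mldr's measures,
# obtaining the raw data and the label column indices.
raw <- some_measure_free_reader("emotions.arff")  # assumed helper, not real API

df <- raw$dataset            # plain data.frame (assumed field name)
label_idx <- raw$labelIndices  # integer vector of label columns (assumed field name)

# Pre-process freely, e.g. drop rows with missing feature values.
df <- df[complete.cases(df), ]

# Only now build a full mldr object (measures are computed here, once).
emotions_mld <- mldr_from_dataframe(df, labelIndices = label_idx,
                                    name = "emotions")
```

The point of the design is that the expensive measure computation happens exactly once, after pre-processing, instead of on every intermediate read.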
That sounds like a great idea. I'll leave this issue open to track the development of this feature (which will be as soon as possible, of course). Thanks for the suggestion @alessanderbotti!
This functionality has been added in bb53bb1. A read.arff function is now available within the mldr namespace, which allows reading multilabel data and obtaining the label indices without calculating the measures. This function does not return an "mldr" object (since mldr methods cannot be applied when the measures are not available); instead, it returns a list containing the relevant data.
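A minimal usage sketch of the new function is shown below. The exact names of the list components are an assumption here (the thread only says a list with the relevant data is returned); inspect the result with str() to see the actual structure.

```r
library(mldr)

# Read the ARFF file without computing any multilabel measures.
parsed <- mldr::read.arff("emotions")

# Inspect what the returned list actually contains.
str(parsed)

# Assumed components (names are illustrative, verify with str()):
# a data.frame with the instances and a vector of label column indices.
df <- parsed$dataframe
label_idx <- parsed$labelIndices

# After pre-processing, promote it to a full mldr object as before.
mld <- mldr_from_dataframe(df, labelIndices = label_idx)
```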