[GSOC] DatasetMapper & Imputer #694

Merged: 47 commits, Jul 25, 2016
Changes from 1 commit
Commits
87c05a5 concept work for imputer (keon, Jun 1, 2016)
2e4b1a8 Merge branch 'master' of github.com:keonkim/mlpack into imputer (keon, Jun 6, 2016)
631e59e do not to use NaN by default, let the user specify (keon, Jun 6, 2016)
391006e Merge branch 'master' of github.com:keonkim/mlpack into imputer (keon, Jun 6, 2016)
6a1fb81 add template to datasetinfo and add imputer class (keon, Jun 12, 2016)
b0c5224 clean datasetinfo class and rename files (keon, Jun 13, 2016)
de35241 implement basic imputation strategies (keon, Jun 13, 2016)
2d38604 modify imputer_main and clean logs (keon, Jun 13, 2016)
bb045b8 add parameter verification for imputer_main (keon, Jun 13, 2016)
1295f4b add custom strategy to impute_main (keon, Jun 13, 2016)
5a517c2 add datatype change in IncrementPolicy (keon, Jun 14, 2016)
94b7a5c update types used in datasetinfo (keon, Jun 14, 2016)
ebed68f initialize imputer with parameters (keon, Jun 14, 2016)
db78f39 remove datatype in dataset_info (keon, Jun 15, 2016)
7c60b97 Merge branch 'master' of github.com:keonkim/mlpack into imputer (keon, Jun 15, 2016)
da4e409 add test for imputer (keon, Jun 15, 2016)
d8618ec restructure, add listwise deletion & imputer tests (keon, Jun 18, 2016)
3b8ffd0 fix transpose problem (keon, Jun 27, 2016)
90a5cd2 Merge pull request #7 from mlpack/master (keon, Jun 27, 2016)
32c8a73 merge (keon, Jun 27, 2016)
e09d9bc updates and fixes on imputation methods (keon, Jun 28, 2016)
87d8d46 update data::load to accept different mappertypes (keon, Jul 1, 2016)
de0b2db update data::load to accept different policies (keon, Jul 1, 2016)
bc187ca add imputer doc (keon, Jul 1, 2016)
a340f69 debug median imputation and listwise deletion (keon, Jul 2, 2016)
21d94c0 remove duplicate code in load function (keon, Jul 2, 2016)
a92afaa delete load overload (keon, Jul 3, 2016)
bace8b2 modify MapToNumerical to work with MissingPolicy (keon, Jul 4, 2016)
896a018 MissingPolicy uses NaN instead of numbers (keon, Jul 4, 2016)
1a908c2 fix reference issue in DatasetMapper (keon, Jul 4, 2016)
2edbc40 Move MapToNumerical(MapTokens) to Policy class (keon, Jul 5, 2016)
d881cb7 make policy and imputation api more consistent (keon, Jul 5, 2016)
a881831 numerical values can be set as missing values (keon, Jul 6, 2016)
63268a3 add comments and use more proper names (keon, Jul 7, 2016)
2eb6754 modify custom impute interface and rename variables (keon, Jul 10, 2016)
6d43aa3 add input-only overloads to imputation methods (keon, Jul 10, 2016)
fedc5e0 update median imputation to exclude missing values (keon, Jul 11, 2016)
787fd82 optimize imputation methods with output overloads (keon, Jul 18, 2016)
a0b7d59 expressive comments in imputation_test (keon, Jul 18, 2016)
9a6dce7 shorten imputation tests (keon, Jul 18, 2016)
c3aeba1 optimize preprocess imputer executable (keon, Jul 18, 2016)
028c217 fix bugs in imputation test (keon, Jul 18, 2016)
03e19a4 add more comments and delete impute_test.csv (keon, Jul 22, 2016)
ef4536b Merge pull request #8 from mlpack/master (keon, Jul 22, 2016)
6e2c1ff Merge branch 'master' of github.com:keonkim/mlpack into imputer (keon, Jul 22, 2016)
5eb9abd fix PARAM statements in imputer (keon, Jul 22, 2016)
d043235 delete Impute() overloads that produce output matrix (keon, Jul 23, 2016)
6 changes: 4 additions & 2 deletions src/mlpack/core/data/dataset_info.hpp
@@ -35,9 +35,9 @@ class DatasetMapper
* the dimensionality cannot be changed later; you will have to create a new
* DatasetMapper object.
*/
DatasetMapper(const size_t dimensionality = 0);
explicit DatasetMapper(const size_t dimensionality = 0);

DatasetMapper(PolicyType& policy, const size_t dimensionality = 0);
explicit DatasetMapper(PolicyType& policy, const size_t dimensionality = 0);
/**
* Given the string and the dimension to which it belongs, return its numeric
* mapping. If no mapping yet exists, the string is added to the list of
@@ -101,6 +101,8 @@ class DatasetMapper
ar & data::CreateNVP(maps, "maps");
}

PolicyType& Policy() const;
Contributor:
I think the return type should be PolicyType const&; otherwise this may not compile (which makes sense: a const member function returning a non-const reference to a data member is weird).

Try to compile the following code and you will find that it does not compile:

struct testClass
{
    int& getValue() const
    {
        return a;
    }

    int a;
};
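A minimal const-correct version of the example, for illustration (this is the fix the comment is pointing at; applied to the PR it would mean Policy() returning PolicyType const&):

struct testClass
{
    // Inside a const member function, 'a' has type 'const int', so it can
    // only be bound to a reference-to-const; this version compiles.
    const int& getValue() const
    {
        return a;
    }

    int a;
};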


private:
//! Types of each dimension.
std::vector<Datatype> types;
6 changes: 6 additions & 0 deletions src/mlpack/core/data/dataset_info_impl.hpp
@@ -115,6 +115,12 @@ inline size_t DatasetMapper<PolicyType>::Dimensionality() const
return types.size();
}

template<typename PolicyType>
inline PolicyType& DatasetMapper<PolicyType>::Policy() const
{
return this->policy;
}

} // namespace data
} // namespace mlpack

6 changes: 5 additions & 1 deletion src/mlpack/core/data/load.hpp
@@ -96,7 +96,11 @@ bool Load(const std::string& filename,
arma::Mat<eT>& matrix,
DatasetMapper<PolicyType>& info,
const bool fatal = false,
const bool transpose = true);
const bool transpose = true)
{
PolicyType policy;
return Load(filename, matrix, info, policy, fatal, transpose);
Contributor:
Since you already provide an API to access the policy of the DatasetMapper, I think we could remove the other Load overload that lets the user pass the policy. This would make the API more consistent.

You can create the DatasetMapper inside the Load function without a separate policy parameter:

info = DatasetMapper<PolicyType>(info.Policy(), cols);

This way users only need to store the state in their DatasetMapper, which I think is less confusing. Otherwise users may think: "I already store my policy in the DatasetMapper, why should I pass the policy into the Load function again?"

If you want to allow users to get/set their policy state, you can add two accessors to the DatasetMapper:

PolicyType const& Policy() const
{
  return policy;
}
void Policy(PolicyType value)
{
  policy = std::move(value);
}
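For illustration, a rough sketch of the resulting call site under this proposal; the file name and the default-constructed MissingPolicy here are placeholders, not code from this PR:

DatasetMapper<MissingPolicy> info;
info.Policy(MissingPolicy()); // store the policy state up front

arma::mat matrix;
data::Load("dataset.csv", matrix, info); // the policy travels inside 'info'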

Member (author):

Thanks for the feedback!
I think a modifier in mlpack in general just returns a reference to the member value.
Should I provide another void Policy(PolicyType value) member function for more usability?
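For reference, the mlpack convention mentioned here is a const getter paired with a non-const modifier that returns a mutable reference; a sketch of that pair for Policy (an assumption, not code from this PR):

//! Get the policy (read-only).
const PolicyType& Policy() const { return policy; }
//! Modify the policy.
PolicyType& Policy() { return policy; }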

Contributor:

> should I provide another void Policy(PolicyType value) member function for more usability?

I think it is a good idea.

}

/**
* Loads a matrix from a file, guessing the filetype from the extension and
167 changes: 0 additions & 167 deletions src/mlpack/core/data/load_impl.hpp
@@ -538,173 +538,6 @@ bool Load(const std::string& filename,
return true;
}


// Load with mappings. Unfortunately we have to implement this ourselves.
template<typename eT, typename PolicyType>
bool Load(const std::string& filename,
arma::Mat<eT>& matrix,
DatasetMapper<PolicyType>& info,
const bool fatal,
const bool transpose)
{
// Get the extension and load as necessary.
Timer::Start("loading_data");

// Get the extension.
std::string extension = Extension(filename);

// Catch nonexistent files by opening the stream ourselves.
std::fstream stream;
stream.open(filename.c_str(), std::fstream::in);

if (!stream.is_open())
{
Timer::Stop("loading_data");
if (fatal)
Log::Fatal << "Cannot open file '" << filename << "'. " << std::endl;
else
Log::Warn << "Cannot open file '" << filename << "'; load failed."
<< std::endl;

return false;
}

if (extension == "csv" || extension == "tsv" || extension == "txt")
{
// True if we're looking for commas; if false, we're looking for spaces.
bool commas = (extension == "csv");

std::string type;
if (extension == "csv")
type = "CSV data";
else
type = "raw ASCII-formatted data";

Log::Info << "Loading '" << filename << "' as " << type << ". "
<< std::flush;
std::string separators;
if (commas)
separators = ",";
else
separators = " \t";

// We'll load this as CSV (or CSV with spaces or tabs) according to
// RFC4180. So the first thing to do is determine the size of the matrix.
std::string buffer;
size_t cols = 0;

std::getline(stream, buffer, '\n');
// Count commas and whitespace in the line, ignoring anything inside
// quotes.
typedef boost::tokenizer<boost::escaped_list_separator<char>> Tokenizer;
boost::escaped_list_separator<char> sep("\\", separators, "\"");
Tokenizer tok(buffer, sep);
for (Tokenizer::iterator i = tok.begin(); i != tok.end(); ++i)
++cols;

// Now count the number of lines in the file. We've already counted the
// first one.
size_t rows = 1;
while (!stream.eof() && !stream.bad() && !stream.fail())
{
std::getline(stream, buffer, '\n');
if (!stream.fail())
++rows;
}

// Now we have the size. So resize our matrix.
if (transpose)
{
matrix.set_size(cols, rows);
info = DatasetMapper<PolicyType>(cols);
}
else
{
matrix.set_size(rows, cols);
info = DatasetMapper<PolicyType>(rows);
}

stream.close();
stream.open(filename, std::fstream::in);

if(transpose)
{
std::vector<std::vector<std::string>> tokensArray;
std::vector<std::string> tokens;
while (!stream.bad() && !stream.fail() && !stream.eof())
{
// Extract line by line.
std::getline(stream, buffer, '\n');
Tokenizer lineTok(buffer, sep);
tokens = details::ToTokens(lineTok);
if(tokens.size() == cols)
{
tokensArray.emplace_back(std::move(tokens));
}
}
for(size_t i = 0; i != cols; ++i)
{
details::TransPoseTokens(tokensArray, tokens, i);
details::MapToNumerical(tokens, i,
info, matrix);
}
}
else
{
size_t row = 0;
while (!stream.bad() && !stream.fail() && !stream.eof())
{
// Extract line by line.
std::getline(stream, buffer, '\n');
Tokenizer lineTok(buffer, sep);
details::MapToNumerical(details::ToTokens(lineTok), row,
info, matrix);
++row;
}
}
}
else if (extension == "arff")
{
Log::Info << "Loading '" << filename << "' as ARFF dataset. "
<< std::flush;
try
{
LoadARFF(filename, matrix, info);

// We transpose by default. So, un-transpose if necessary...
if (!transpose)
inplace_transpose(matrix);
}
catch (std::exception& e)
{
if (fatal)
Log::Fatal << e.what() << std::endl;
else
Log::Warn << e.what() << std::endl;
}
}
else
{
// The type is unknown.
Timer::Stop("loading_data");
if (fatal)
Log::Fatal << "Unable to detect type of '" << filename << "'; "
<< "incorrect extension?" << std::endl;
else
Log::Warn << "Unable to detect type of '" << filename << "'; load failed."
<< " Incorrect extension?" << std::endl;

return false;
}

Log::Info << "Size is " << (transpose ? matrix.n_cols : matrix.n_rows)
<< " x " << (transpose ? matrix.n_rows : matrix.n_cols) << ".\n";

Timer::Stop("loading_data");

return true;
}

// Load a model from file.
template<typename T>
bool Load(const std::string& filename,