I started copying some of my old unit tests, but the code has changed enough to make this a non-trivial task, so I'm deferring the rest until I can do it properly. I also augmented the steelStrain dataset to include exy and eyy columns in addition to the exx column.
I want my data generation to be more reproducible and transparent, so I made a script that processes the raw data for the steelStrain dataset. The script will be tracked on GitHub, but the data file will be excluded from the package build (at 5.1 MB, it's too big).
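Excluding a file from the build while keeping it in the repository can be sketched with `.Rbuildignore` entries (the `data-raw` directory name is an assumption, following the common convention of keeping raw data and generation scripts there):

```
# .Rbuildignore -- Perl-style regexes, one per line.
# Keeps the raw data and its generation script out of the built package
# tarball, while both remain tracked in the git repository.
^data-raw$
```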
I was a little overzealous in adding 'gppois:::' before each private method. This qualifier appears to be unnecessary for code that is WITHIN the package. (Code "outside" the package, such as the steelStrain.R demo, still needs it.)
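The distinction can be sketched as follows (the helper name is made up for illustration, not an actual gppois function):

```r
# Inside the package: unexported (private) functions are visible
# directly, so no qualifier is needed.
result <- somePrivateHelper(x)

# Outside the package (e.g., in the steelStrain.R demo), unexported
# objects must be reached with the triple-colon operator:
result <- gppois:::somePrivateHelper(x)
```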
R.oo provides syntactic sugar for method access: instead of Method(this=object, ...), you can write object$Method(...), which is a more "object-oriented" style. Unfortunately, this seems to rely on the fully qualified method name, i.e. Method.class, being exported to the namespace. Private methods should *not* be exported to the namespace, so we can't use the "nice" way to access private methods.
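A minimal R.oo sketch of the two call styles (the Widget class and describe method are invented for illustration):

```r
library(R.oo)

# Define a trivial R.oo class with one public method.
setConstructorS3("Widget", function(value = 0) {
  extend(Object(), "Widget", .value = value)
})
setMethodS3("describe", "Widget", function(this, ...) {
  cat("Widget with value", this$.value, "\n")
})

w <- Widget(42)
describe(this = w)  # functional call style: works as long as the generic is visible
w$describe()        # sugared style: needs describe.Widget reachable via the namespace
```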
Apparently I need to export both the generic function and the fully qualified name for each class, or else the R.oo syntactic sugar won't work. This commit doesn't represent a working state, because I still need to fix the private files, but it should be OK for the public ones.
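In NAMESPACE terms, that means a pair of export lines per public method (the method and class names here are illustrative):

```
# NAMESPACE
export(describe)         # the generic function
export(describe.Widget)  # the fully qualified name, needed for obj$describe()
```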
- Added export statements for the print and clone methods.
- For now, I have to manually add the constructors to NAMESPACE, because they use R.oo instead of roxygen.
- Gave in to the Collate message in the DESCRIPTION file, which the newer devtools generates.
Bug: certain Models would get stuck at the boundaries during training. It turns out this was because 'optim' *ignores* the names on the 'lower' and 'upper' vectors, so boundaries weren't necessarily matched with the corresponding parameters. Fix: changed getLower and getUpper to omit values for constant parameters, in accordance with the behaviour of getParams.
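The pitfall can be reproduced in a few lines of base R: 'optim' matches bounds to parameters by position, not by name, so a reordered-but-correctly-named bounds vector silently applies the wrong limits.

```r
# Parameters and bounds share names but differ in order.
par   <- c(sigma = 1, ell = 10)
lower <- c(ell = 5, sigma = 0.1)   # same names, different order
upper <- c(ell = 100, sigma = 2)

fit <- optim(par = par, fn = function(p) sum(p^2),
             method = "L-BFGS-B", lower = lower, upper = upper)
# The bounds are applied positionally: sigma is constrained to [5, 100]
# and ell to [0.1, 2] -- each parameter got the *other* one's bounds.
```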
The optimizer seems to get confused when parameter values are clamped. I think I'll have better results if I just let L-BFGS-B handle the boundaries: expand the Model's bounds ($lower and $upper) during training, then restore them once training is done.
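A sketch of the relaxed-bounds idea, using plain 'optim' with a hypothetical objective (this is not the package's actual training code, and it assumes strictly positive bounds):

```r
# Widen the box constraints so the optimizer isn't pinned to a
# boundary, run L-BFGS-B, then clamp back into the original box.
trainRelaxed <- function(fn, par, lower, upper, factor = 10) {
  fit <- optim(par = par, fn = fn, method = "L-BFGS-B",
               lower = lower / factor,   # assumes lower > 0
               upper = upper * factor)
  # Restore the original bounds by clamping the result.
  pmin(pmax(fit$par, lower), upper)
}
```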
First code relating to a demo for the package: made a demo directory and script. Other changes:
- Dataset.R: improved the Plot2D function; you can now adjust the vertical scale, and it no longer forces you to clear the plot.
- utils.R: made a "pause" function for the demo.
- Covariance.R: removed a useless line.

Model training is not working for aniso2D, though it works quite well for regular SE. In particular, it always maximizes the sigmas (and the ells are correspondingly too large). I've verified that the parameter values from DEoptim have significantly higher LogML than the parameters where it's getting stuck. Maybe it's some kind of boundary issue? Maybe try enlarging the Model's bounds, then shrinking them back down? L-BFGS-B really shouldn't propose outside the bounds, so we should still be OK...