Conversation

@berndbischl (Member) commented Jan 9, 2020

docs and unit tests are still missing

@berndbischl added this to the 0.2 milestone Jan 9, 2020
@berndbischl (Member Author):

I added some comments, docs, and tests. Note that I had to change some existing tests (slightly).

@berndbischl requested a review from mllg January 9, 2020 03:42
@jakob-r (Member) commented Jan 9, 2020

I have a gut feeling that we are trying to fix something here that we intentionally decided not to do.

And please do not merge until grammar and spelling are fixed! (edit: spelling in comments does not matter)

@jakob-r (Member) left a review comment:

Spelling (at least) (see comments)

@berndbischl (Member Author):

> I have a gut feeling that we are trying to fix something here that we intentionally decided not to do.

Well, get on Mattermost or describe in the issue what I am supposed to do instead. I wrote in the issue that I am also not very fond of this...

@berndbischl (Member Author):

> Spelling (at least)

What exactly do you mean? Do you want to iterate on this? Isn't that easier?

@berndbischl (Member Author):

Also note that Travis isn't green, and I don't know why.

@pat-s (Member) commented Jan 9, 2020

I can reproduce the Travis issue locally when running devtools::test().

Looks like an issue triggered by _R_CHECK_LENGTH_1_CONDITION_=true or similar checks, even though I also get it when setting this one to FALSE.

@berndbischl (Member Author):

> I can reproduce the Travis issue locally when running devtools::test().

Why don't I get this?

I found the bug by looking at the diffs / Travis / messages and thinking. I fixed it now; Travis really found an honest bug. But I am really worried that I still cannot see this locally. The bug is basically an instance where I get a logical vector of length 2 and then do an "if (foo)" on it, so it should have been a vector of length 1. Doesn't R at least issue a warning in these instances? @mllg
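
For reference, a minimal sketch of this class of bug (hypothetical code, not the actual change in this PR):

    # run under the check the Travis setup apparently enables, e.g.
    #   _R_CHECK_LENGTH_1_CONDITION_=true Rscript repro.R
    feasible = c(TRUE, FALSE)      # length-2 logical where a scalar was intended
    if (feasible) {                # plain R (< 4.2) only warns and uses feasible[1];
      message("looks feasible")    # with the check enabled (or in R >= 4.2) this errors
    }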

@codecov-io commented Jan 9, 2020

Codecov Report

Merging #260 into master will increase coverage by 0.03%.
The diff coverage is 100%.

@@            Coverage Diff             @@
##           master     #260      +/-   ##
==========================================
+ Coverage    95.4%   95.44%   +0.03%     
==========================================
  Files          20       20              
  Lines         501      505       +4     
==========================================
+ Hits          478      482       +4     
  Misses         23       23
Impacted Files   Coverage Δ
R/ParamSet.R     97.29% <100%> (+0.07%) ⬆️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 65abc35...f0b4a06.

@berndbischl (Member Author):

@jakob-r

Spelling:
I mean, I added 1-2 lines in the docs? I re-read them now and really cannot spot any typos. Please look at it for 2 minutes; I would like to merge then.

Your general concerns:
If you think this is bad and would rather veto it, please speak up very soon. But then you must address the issue, I guess, and say what we should do instead.

@jakob-r (Member) commented Jan 9, 2020

Why this is potentially bad:
We assume that the default in the Param is set correctly.
This was sometimes not the case in the past, and it is hard to test whether it is correct. Therefore I would not rely on it. Our stance on the default of a Param was always "it's meant more as some kind of documentation to help the user". Introducing this change would heavily increase the significance of the default.
If the default is wrongly defined, we risk that the true parameters used internally in the learner diverge from what the user implicitly assumes. This is especially critical since the learners often accept infeasible parameter settings without complaining.

My preferred solution: if I want to use a parameter that depends on another parameter, I have to set that other parameter myself.
Advantages (I will follow the example in #259; see the sketch after this list):

  1. The user already gets an error message that is helpful:
    #> Error in (function (xs) : Assertion on 'xs' failed: Condition for 'cost' not ok: type equal C-classification; instead: type=<not-there>.
  2. It is usually just one line the user has to add:
    learner$param_set$values$type = "C-classification"
  3. The user is completely aware of what happens.
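
A minimal sketch of this workflow (hypothetical code, assuming the classif.svm learner from mlr3learners as in #259):

    library(mlr3)
    library(mlr3learners)
    learner = lrn("classif.svm")
    # set the parent parameter explicitly before using the dependent one
    learner$param_set$values$type = "C-classification"
    # 'cost' is now feasible because its condition on 'type' is satisfied
    learner$param_set$values$cost = 1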

I still think we already had this discussion.

@jakob-r (Member) commented Jan 9, 2020

Also:

How do we deal with concurrent settings in lrn$param_set$values and the defaults?
(Hint: the best idea would be not having to worry about it and letting the user decide.)

@berndbischl (Member Author):

> I still think we already had this discussion.

Not sure; at least we had a very similar one. I think what we discussed back then was more about state changes of the param-values list. This only concerns feasibility checks.

Anyway, thanks for your longer post. I am beginning to agree with you. @mllg @mb706, what do you think? Please post your opinion soon. If we follow @jakob-r, we would need to clean up / improve messages and docs about this stuff.

@jakob-r (Member) commented Jan 10, 2020

> we would need to clean up / improve messages and docs about this stuff.

Yes. But it should be fairly simple to check whether a ParamSet has a dependency that cannot be resolved, e.g. when the TuningInstance is initialized. Solving this elegantly does not seem trivial to me, unfortunately: where should the code for the check live? Within the ParamSet, because it is a general operation to check whether two ParamSets are compatible (Learner PS and Tuning PS), or is it custom mlr3(tuning) code?

@berndbischl (Member Author):

> Yes. But it should be fairly simple to check whether a ParamSet has a dependency that cannot be resolved, e.g. when the TuningInstance is initialized. Solving this elegantly does not seem trivial to me, unfortunately: where should the code for the check live? Within the ParamSet, because it is a general operation, or is it custom mlr3(tuning) code?

I don't think this is hard to do; the code is nearly there already. For each param B, we know whether there is a condition that has this param as parent. If the parent is not present in the set, we can generate an error. That check function can live in paradox. Whether we call it in the constructor of the ParamSet or in the TuningInstance, I don't know. Why not in the constructor of the PS? Otherwise such a set is never valid, right?
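
A rough sketch of what such a check could look like, assuming the set exposes its dependencies as a table with child (id) and parent (on) columns, as paradox's $deps does; the function name is made up:

    # hypothetical helper, not an existing paradox function
    assert_deps_resolvable = function(ps) {
      parents = unique(ps$deps$on)           # params that other params depend on
      missing = setdiff(parents, ps$ids())   # parents not contained in the set itself
      if (length(missing) > 0L) {
        stop(sprintf("ParamSet has dependencies on missing parent params: %s",
          paste0(missing, collapse = ", ")))
      }
      invisible(ps)
    }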

@jakob-r (Member) commented Jan 10, 2020

> Whether we call it in the constructor of the ParamSet or in the TuningInstance, I don't know. Why not in the constructor of the PS?

Because the learner PS is not present then.

@berndbischl (Member Author):

> Because the learner PS is not present then.

But we don't currently use this for anything?

@jakob-r (Member) commented Jan 10, 2020

> > Because the learner PS is not present then.
>
> But we don't currently use this for anything?

The requirements are encoded in the learner PS but not in the tuning PS?

(Are we talking past each other?)

@jakob-r (Member) commented Jan 17, 2020

I added an alternative solution in #262. Feedback is welcome @berndbischl @mllg @pat-s

@jakob-r (Member) commented Feb 5, 2020

Proposal: Close this issue (we now have a better error message)

@pat-s closed this Feb 13, 2020