Describe the solution you'd like
I want to have an actual implementation of the boosting framework presented in this paper, to further my own understanding.
Describe alternatives you've considered
So far it looks like TensorFlow Decision Forests has a module similar to what I'm looking for, but adopting and subclassing that framework just to plug in a new optimization algorithm (the one from the paper) seems like overkill.
Additional context
I'm interested in understanding this library better, so I've actually started implementing this on my fork. So far it includes what I think is the code for the model, plus a demo notebook.
The main blockers I think I have right now are the following:
reproducibility - when running my current implementation (i.e., adaboost.py) in the notebook (see cell 12 of plot_adaboost_twoclass.ipynb), training accuracy seems to "bounce around" between runs: e.g. 43% on the first run, then 47%, and eventually 0.00% within ~5 runs. Is this an expected result, and if not, what can be done to fix it?
testing - how should I go about writing tests for adaboost.py?
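On the reproducibility blocker above, a usual first step is to rule out seed-driven variance. A minimal sketch, using scikit-learn's own AdaBoostClassifier as a stand-in for the implementation in adaboost.py (the helper `train_accuracy` is made up for illustration):

```python
# Sketch: checking whether run-to-run accuracy swings come from random
# initialization. AdaBoostClassifier stands in for the custom booster.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=200, random_state=0)

def train_accuracy(seed):
    # Fixing random_state makes the entire fit deterministic.
    clf = AdaBoostClassifier(n_estimators=50, random_state=seed)
    return clf.fit(X, y).score(X, y)

# Same seed => identical results across runs.
assert train_accuracy(0) == train_accuracy(0)
```

If accuracy still swings with the seed fixed, the variance is coming from the algorithm itself rather than initialization; a steady decay toward 0% would make me look first at whether the sample weights in the boosting loop are being renormalized each round (an assumption worth checking, not a diagnosis).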
Any other feedback/questions are appreciated!
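On the testing blocker, a pytest-style sketch of the kind of behavioral tests that could back adaboost.py — again with AdaBoostClassifier standing in for the custom class, and with thresholds that are assumptions rather than known properties of the paper's algorithm:

```python
# Sketch: behavioral tests for a boosting estimator. Swap AdaBoostClassifier
# for the class in adaboost.py once it exposes a fit/predict/score API.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

def test_beats_chance_on_training_data():
    X, y = make_classification(n_samples=300, random_state=42)
    clf = AdaBoostClassifier(n_estimators=50, random_state=42).fit(X, y)
    # A boosted ensemble should comfortably beat the ~50% base rate.
    assert clf.score(X, y) > 0.8

def test_predictions_are_valid_labels():
    X, y = make_classification(n_samples=100, random_state=0)
    clf = AdaBoostClassifier(random_state=0).fit(X, y)
    pred = clf.predict(X)
    assert pred.shape == (100,)
    assert set(np.unique(pred)) <= set(np.unique(y))
```

scikit-learn also provides sklearn.utils.estimator_checks.check_estimator, which runs the project's common estimator test suite against any class following the estimator API — probably the end goal if this is ever contributed upstream.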
Is your feature request related to a problem? Please describe.
I've been trying to understand the paper "Efficient, Noise-Tolerant, and Private Learning via Boosting", and it is difficult because the work is purely theoretical.