Add missing metrics to GLM scoring history #8459

Closed
exalate-issue-sync bot opened this issue May 11, 2023 · 19 comments
Comments


We're adding a new feature to H2O: [learning curve plots|https://github.com/h2oai/h2o-3/pull/5164] (a way to evaluate overfitting of models, and something to add to the explainability suite). It plots the validation, CV, and training error over the learning iterations of a model (e.g. number_of_trees, iterations). However, one thing that's not working properly is the Stacked Ensemble & GLM learning curve plots. For Stacked Ensemble, this is because we show the metalearner learning curve, which is GLM.

GLM is being its usual special self and is not in sync with the rest of the H2O models. The GLM scoring history is missing:

  • CV metrics (if lambda search is turned off, only the {{objective}} metric is available; with lambda search they are shown). See the plot with missing CV/valid metrics below. For the {{objective}} metric there is no validation scoring history, so we can plot only the "training" part of the CV.
  • Most of the standard metrics altogether (AUC, logloss, MSE, RMSE, etc.).

It would be great to release the learning curve plots with full functionality.
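
For illustration, here is a minimal way to see the gap from the Python API (a sketch, assuming a running H2O cluster; the public prostate dataset URL is just an example):

{noformat}import h2o
from h2o.estimators import H2OGeneralizedLinearEstimator

h2o.init()

# any small binomial dataset works; this public sample file is just an example
train = h2o.import_file("https://s3.amazonaws.com/h2o-public-test-data/smalldata/prostate/prostate.csv")
train["CAPSULE"] = train["CAPSULE"].asfactor()

glm = H2OGeneralizedLinearEstimator(family="binomial")
glm.train(y="CAPSULE", training_frame=train)

# For GBM/DRF this table carries training_logloss, training_auc, etc.;
# for GLM it only carries iteration-level columns such as negative_log_likelihood and objective.
print(list(glm.scoring_history().columns))
{noformat}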


Tomas Fryda commented: I enumerated the different metrics under different scenarios (regression, binomial, multinomial) and which algos use them; maybe it will be helpful to you: https://github.com/h2oai/h2o-3/pull/5164#issuecomment-744463257

About the CV metrics
I store the scoring histories of the CV models, so the issue there is that we don't have validation. I think the issue doesn't concern GLM with lambda search, but it does concern all the other cases: with lambda search we have {{deviance_train}} and {{deviance_validation}}, but without lambda search we have just {{objective}} or {{convergence}} (HGLM).
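
Continuing the sketch above, this asymmetry can be seen by toggling {{lambda_search}} (hedged: the exact column names depend on the H2O version):

{noformat}# with lambda_search the history carries deviance_train (and deviance_xval with CV);
# without it, only objective-style columns show up
for ls in (True, False):
    glm = H2OGeneralizedLinearEstimator(family="binomial", lambda_search=ls, nfolds=3)
    glm.train(y="CAPSULE", training_frame=train)
    print("lambda_search =", ls, "->", list(glm.scoring_history().columns))
{noformat}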


Wendy commented: [~accountid:5e43370f5a495e0c91a74ebe] I have run GLM with a validation_frame and set score_each_iteration=True. Here is the scoring history header:

!image-20210119-212038.png|width=379,height=447!

Here is the scoring history value:

!image-20210119-212110.png|width=1470,height=291!

Is that what you want? It is already there for when lambda_search=False. I am not worrying about CV, as you mentioned that you are taking care of it.


Wendy commented: However, I do see an error when score_each_iteration is not set to true. I will fix it.


Tomas Fryda commented: Thank you [~accountid:557058:1f01b471-f37b-40af-bae9-a18b38e24549]! It looks to me like fixing the error will fulfill this JIRA. I am not sure which metrics are implemented in GLM for which tasks (regression, binomial, multinomial), so I made a table that shows what I can expect from other algos (DRF, DeepLearning, ...) and when. The {{opt-in in multinomial}} is a recent addition by [~accountid:5bd237b8dd3cc64b77e71676], so I am not sure if GLM/GAM is supported.

{noformat}   Task                                Stopping Metric    Training Scoring History Name    Validation Scoring History Name
0  binomial                            lift_top_group     training_lift                    validation_lift
1  binomial, + opt-in in multinomial   AUC                training_auc                     validation_auc
2  binomial, + opt-in in multinomial   AUCPR              training_pr_auc                  validation_pr_auc
3  binomial, multinomial               logloss            training_logloss                 validation_logloss
4  binomial, multinomial               misclassification  training_classification_error    validation_classification_error
5  binomial, multinomial, regression   RMSE               training_rmse                    validation_rmse
6  regression                          MAE                training_mae                     validation_mae{noformat}
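
The same mapping, restated as a small lookup table in Python (a convenience sketch only; the names are copied verbatim from the table above):

{noformat}# stopping metric -> (training column, validation column) in the scoring history
METRIC_COLUMNS = {
    "lift_top_group":    ("training_lift", "validation_lift"),                 # binomial
    "AUC":               ("training_auc", "validation_auc"),                   # binomial (+ opt-in multinomial)
    "AUCPR":             ("training_pr_auc", "validation_pr_auc"),             # binomial (+ opt-in multinomial)
    "logloss":           ("training_logloss", "validation_logloss"),           # binomial, multinomial
    "misclassification": ("training_classification_error",
                          "validation_classification_error"),                  # binomial, multinomial
    "RMSE":              ("training_rmse", "validation_rmse"),                 # binomial, multinomial, regression
    "MAE":               ("training_mae", "validation_mae"),                   # regression
}
{noformat}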


Wendy commented: [~accountid:5e43370f5a495e0c91a74ebe] The list is very helpful. Thanks. I will deal with it this weekend. However, I am going to give you a few pointers in case you cannot wait.


Wendy commented: When CV and lambda_search are enabled, here is the code in GLM.java around line 144:

{noformat}public void cv_computeAndSetOptimalParameters(ModelBuilder[] cvModelBuilders) {
  if (_parms._max_runtime_secs != 0) _parms._max_runtime_secs = 0;
  _xval_deviances = new double[_parms._lambda.length * _parms._alpha.length];
  _xval_sd = new double[_parms._lambda.length * _parms._alpha.length];
  double bestTestDev = Double.POSITIVE_INFINITY;
  int lmin_max = 0;
  for (int i = 0; i < cvModelBuilders.length; ++i) { // find the highest best_submodel_idx we need to go through
    GLM g = (GLM) cvModelBuilders[i];
    lmin_max = Math.max(lmin_max, g._model._output._selected_submodel_idx);
  }
  int lidx = 0;   // index into submodel
  int bestId = 0; // submodel index with best deviance from xval
  int cnt = 0;
  for (; lidx < lmin_max; ++lidx) { // search through submodels with the same lambda and alpha values
    double testDev = 0;
    double testDevSq = 0;
    for (int i = 0; i < cvModelBuilders.length; ++i) { // run cv for each lambda value
      GLM g = (GLM) cvModelBuilders[i];
      if (g._model._output._submodels[lidx] != null) {
        double lambda = g._model._output._submodels[lidx].lambda_value;
        g._driver.computeSubmodel(lidx, lambda, Double.NaN, Double.NaN);
        testDev += g._model._output._submodels[lidx].devianceValid;
        testDevSq += g._model._output._submodels[lidx].devianceValid * g._model._output._submodels[lidx].devianceValid;
      }
    }
    double testDevAvg = testDev / cvModelBuilders.length; // average test deviance for a fixed submodel index
    double testDevSE = testDevSq - testDevAvg * testDev;
    _xval_sd[lidx] = Math.sqrt(testDevSE / ((cvModelBuilders.length - 1) * cvModelBuilders.length));
    _xval_deviances[lidx] = testDevAvg;
    if (testDevAvg < bestTestDev) {
      bestTestDev = testDevAvg;
      bestId = lidx;
    }
    // early stopping - no reason to move further if we're overfitting
    if (testDevAvg > bestTestDev && ++cnt == 3) {
      lmin_max = lidx;
      break;
    }
  }
  for (int i = 0; i < cvModelBuilders.length; ++i) {
    GLM g = (GLM) cvModelBuilders[i];
    if (g._toRemove != null)
      for (Key k : g._toRemove)
        Keyed.remove(k);
  }
  _parms._lambda = Arrays.copyOf(_parms._lambda, lmin_max + 1);
  _xval_deviances = Arrays.copyOf(_xval_deviances, lmin_max + 1);
  _xval_sd = Arrays.copyOf(_xval_sd, lmin_max + 1);
  for (int i = 0; i < cvModelBuilders.length; ++i) {
    GLM g = (GLM) cvModelBuilders[i];
    g._model._output.setSubmodelIdx(bestId);
  }
  double bestDev = _xval_deviances[bestId];
  double bestDev1se = bestDev + _xval_sd[bestId];
  int bestId1se = bestId;
  while (bestId1se > 0 && _xval_deviances[bestId1se - 1] <= bestDev1se)
    --bestId1se;
  _lambdaCVEstimate = ((GLM) cvModelBuilders[0])._model._output._submodels[bestId].lambda_value;
  _bestCVSubmodel = bestId;
  _model._output._lambda_1se = bestId1se; // submodel id with bestDev + one sigma
  _model._output._selected_submodel_idx = bestId; // set best submodel id here
  for (int i = 0; i < cvModelBuilders.length; ++i) {
    GLM g = (GLM) cvModelBuilders[i];
    GLMModel gm = g._model;
    gm.write_lock(_job);
    gm.update(_job);
    gm.unlock(_job);
  }
  _doInit = false;
}{noformat}

This is where _xval_deviances and _xval_sd are calculated. However, they are only calculated when lambda_search=true. You can do something similar here for when lambda_search=false.
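
As a side note, the {{testDevSE}} expression above is algebraically the standard error of the mean of the per-fold validation deviances; here is a small numpy check of that identity (illustrative only, with made-up fold deviances):

{noformat}import numpy as np

d = np.array([1.8, 2.1, 1.9, 2.4, 2.0])  # hypothetical per-fold validation deviances
n = len(d)

test_dev_avg = d.sum() / n
test_dev_se = (d ** 2).sum() - test_dev_avg * d.sum()  # testDevSq - testDevAvg * testDev
xval_sd = np.sqrt(test_dev_se / ((n - 1) * n))          # _xval_sd[lidx]

# identical to the usual standard error of the mean
assert np.isclose(xval_sd, d.std(ddof=1) / np.sqrt(n))
{noformat}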


Wendy commented: The next step is to add the xval-deviance and xval-se metrics to the scoring table. You need to add the same code from the lambda-search scoring history:

{noformat}public synchronized TwoDimTable to2dTable() {
  String[] cnames = new String[]{"timestamp", "duration", "iteration", "lambda", "predictors", "deviance_train"};
  if (_lambdaDevTest != null)
    cnames = ArrayUtils.append(cnames, "deviance_test");
  if (_lambdaDevXval != null)
    cnames = ArrayUtils.append(cnames, new String[]{"deviance_xval", "deviance_se_xval"});
  String[] ctypes = new String[]{"string", "string", "int", "string", "int", "double"};
  if (_lambdaDevTest != null)
    ctypes = ArrayUtils.append(ctypes, "double");
  if (_lambdaDevXval != null)
    ctypes = ArrayUtils.append(ctypes, new String[]{"double", "double"});
  String[] cformats = new String[]{"%s", "%s", "%d", "%s", "%d", "%.3f"};
  if (_lambdaDevTest != null)
    cformats = ArrayUtils.append(cformats, "%.3f");
  if (_lambdaDevXval != null)
    cformats = ArrayUtils.append(cformats, new String[]{"%.3f", "%.3f"});
  cnames = ArrayUtils.append(cnames, "alpha");
  ctypes = ArrayUtils.append(ctypes, "double");
  cformats = ArrayUtils.append(cformats, "%.6f");
  TwoDimTable res = new TwoDimTable("Scoring History", "", new String[_lambdaIters.size()], cnames, ctypes, cformats, "");
  for (int i = 0; i < _lambdaIters.size(); ++i) {
    int col = 0;
    res.set(i, col++, DATE_TIME_FORMATTER.print(_scoringTimes.get(i)));
    res.set(i, col++, PrettyPrint.msecs(_scoringTimes.get(i) - _scoringTimes.get(0), true));
    res.set(i, col++, _lambdaIters.get(i));
    res.set(i, col++, lambdaFormatter.format(_lambdas.get(i)));
    res.set(i, col++, _lambdaPredictors.get(i));
    res.set(i, col++, _lambdaDevTrain.get(i));
    if (_lambdaDevTest != null && _lambdaDevTest.size() > i)
      res.set(i, col++, _lambdaDevTest.get(i));
    if (_lambdaDevXval != null && _lambdaDevXval.size() > i) {
      res.set(i, col++, _lambdaDevXval.get(i));
      res.set(i, col++, _lambdaDevXvalSE.get(i));
    }
    res.set(i, col++, _alphas.get(i));
  }
  return res;
}{noformat}

to the plain ScoringHistory:

{noformat}public synchronized TwoDimTable to2dTable() {
  String[] cnames = new String[]{"timestamp", "duration", "iterations", "negative_log_likelihood", "objective"};
  String[] ctypes = new String[]{"string", "string", "int", "double", "double"};
  String[] cformats = new String[]{"%s", "%s", "%d", "%.5f", "%.5f"};
  TwoDimTable res = new TwoDimTable("Scoring History", "", new String[_scoringIters.size()], cnames, ctypes, cformats, "");
  for (int i = 0; i < _scoringIters.size(); ++i) {
    int col = 0;
    res.set(i, col++, DATE_TIME_FORMATTER.print(_scoringTimes.get(i)));
    res.set(i, col++, PrettyPrint.msecs(_scoringTimes.get(i) - _scoringTimes.get(0), true));
    res.set(i, col++, _scoringIters.get(i));
    res.set(i, col++, _likelihoods.get(i));
    res.set(i, col++, _objectives.get(i));
  }
  return res;
}{noformat}

If you want to add deviance to it, you will have to add it to this to2dTable, just like in the lambda-search scoring history.
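
Once those columns are wired in, a quick way to confirm them from Python would be (a hedged check, reusing the binomial frame from the earlier sketch; the column names are taken from the to2dTable code above):

{noformat}glm = H2OGeneralizedLinearEstimator(family="binomial", lambda_search=True, nfolds=5)
glm.train(y="CAPSULE", training_frame=train)

cols = set(glm.scoring_history().columns)
# per the lambda-search to2dTable, CV runs should add these two columns
print({"deviance_xval", "deviance_se_xval"} <= cols)
{noformat}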


Wendy commented: The last bit is the other metrics that you are looking for. They are all there; they just need to be added. Actually, if you set score_each_iteration=true, the variable {{_model.getScoringInfo()}} will contain all the scoring info with all the needed metrics. However, I was not able to get it to work when score_each_iteration=false. Here is the code that deals with updating the scoringInfo:

{noformat}protected void updateProgress(GLMModel fixedModel, GLMModel[] randModels, Frame glmmmeReturns, Frame hvDataOnly,
                              double[] VC1, double[][] VC2, double sumDiff2, double convergence, boolean canScore,
                              double[][] cholR, Frame augXZ) {
  _scoringHistory.addIterationScore(_state._iter, _state._sumEtaSquareConvergence);
  if (canScore && (_parms._score_each_iteration || timeSinceLastScoring() > _scoringInterval ||
      ((_parms._score_iteration_interval > 0) && ((_state._iter % _parms._score_iteration_interval) == 0)))) {
    _model.update(_state.expandBeta(_state.beta()), _state.ubeta(), -1, -1, _state._iter);
    scoreAndUpdateModelHGLM(fixedModel, randModels, glmmmeReturns, hvDataOnly, VC1, VC2, sumDiff2, convergence,
        cholR, augXZ, false);
    _earlyStop = updateEarlyStop();
  }
}

// update user visible progress
protected void updateProgress(boolean canScore) {
  assert !_parms._lambda_search;
  _scoringHistory.addIterationScore(_state._iter, _state.likelihood(), _state.objective());
  _job.update(_workPerIteration, _state.toString());
  if (canScore && (_parms._score_each_iteration || timeSinceLastScoring() > _scoringInterval)) {
    _model.update(_state.expandBeta(_state.beta()), -1, -1, _state._iter);
    scoreAndUpdateModel();
    _earlyStop = updateEarlyStop();
  }
}{noformat}

If everything works, the following will grab everything for you:

{noformat}TwoDimTable scoring_history_early_stop = ScoringInfo.createScoringHistoryTable(_model.getScoringInfo(),
    (null != _parms._valid), false, _model._output.getModelCategory(), false);
_model._output._scoring_history = combineScoringHistory(_model._output._scoring_history,
    scoring_history_early_stop,
    (_parms._lambda_search ? _lambdaSearchScoringHistory._lambdaIters : _scoringHistory._scoringIters));{noformat}


Wendy commented: [tomf|https://app.slack.com/team/UTJB8BVHR]


  [21 hours ago|https://h2oai.slack.com/archives/C03HXQSLW/p1611262249047000?thread_ts=1611076467.077800&cid=C03HXQSLW]

@erin Today I had a long look into the GLM code and found out that the behavior is probably correct and as expected. [@wendy|https://h2oai.slack.com/team/U0D3F7JR3] pointed out there is {{score_each_iteration}}, and if that is set to {{true}} then everything works as expected. The confusing part is that the scoring history contains information for each iteration with some metric ({{deviance}}, {{objective}}, etc.), but the {{training_}} and {{validation_}} metrics are created every 15s or every 20 iteration durations, whichever takes longer, and since GLM is fast I usually end up with just one point containing non-NA {{training_}} and {{validation_}} metrics.

So I am wondering what the best approach is here. We could make SE slower by scoring each iteration (to make nice plots), or I could create new logic for G(A|L)Ms that would default to plotting the normal metrics ({{training_}}) iff there are more than N non-NA points, and otherwise just use what it does now ({{deviance}}, etc.), throwing a warning that a nicer learning curve could be produced by training with {{score_each_iteration}}. The warning could appear every time I fall back to {{deviance}} etc., or just when the user specifies a metric for which I don't have enough non-missing points.

A more involved solution might be to try to calculate the {{training_}} metrics when {{deviance}}, {{likelihood}}, etc. are calculated, which I think happens during gradient calculation, depending on the type of solver. I think this could be faster since it wouldn't involve re-scoring the whole train set (please correct me if I am wrong [@wendy|https://h2oai.slack.com/team/U0D3F7JR3]).

(It also makes me wonder how precise the early stopping is when using metrics other than the ones calculated every iteration.) Or if you have some other ideas about what I could do, I would be happy to hear them!
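
A rough sketch of the fallback logic proposed here (a hypothetical helper, not existing h2o code; the function name, threshold, and fallback column are illustrative):

{noformat}import warnings

def pick_learning_curve_metric(history, requested, fallback="deviance_train", min_points=5):
    """Pick the metric column to plot from a scoring-history DataFrame.

    requested: e.g. 'training_rmse'; falls back to an always-present column
    (deviance/objective style) when too few scored points are available.
    """
    if requested in history.columns and history[requested].notna().sum() >= min_points:
        return requested
    warnings.warn("Too few scored points for %r; falling back to %r. "
                  "Retrain with score_each_iteration=True for a nicer curve."
                  % (requested, fallback))
    return fallback
{noformat}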


Wendy commented: [erin|https://app.slack.com/team/U04GWNF5J] [9 hours ago|https://h2oai.slack.com/archives/C03HXQSLW/p1611303941051500?thread_ts=1611076467.077800&cid=C03HXQSLW]

hey [@TomF|https://h2oai.slack.com/team/UTJB8BVHR] thanks for the update! [@TomF|https://h2oai.slack.com/team/UTJB8BVHR] [@wendy|https://h2oai.slack.com/team/U0D3F7JR3] i have some questions: I was noticing that GLM has the early stopping args (e.g. {{stopping_rounds, stopping_metric, stopping_tolerance}}). Are those actually scoring based on other metrics like AUC (for example), and if so, are we just throwing away that data and not storing it in the scoring history? (These are the missing GLM scoring history metrics: AUC, logloss, etc.) For the SE GLM metalearner in AutoML, maybe we can just use the {{stopping_rounds}} arg to get a more regular (and not too computationally demanding) scoring history?


Wendy commented: [tomf|https://app.slack.com/team/UTJB8BVHR]


  [4 hours ago|https://h2oai.slack.com/archives/C03HXQSLW/p1611322298051700?thread_ts=1611076467.077800&cid=C03HXQSLW]

@erin It looks to me like even if you set {{stopping_metric="RMSE", stopping_rounds > 0}} and do not set {{score_each_iteration=true}}, RMSE won't be calculated every iteration, but when it does get calculated it shows up in the scoring history [1]. However, this is limited by {{max(15s, 20*duration_of_an_iteration)}}, so there are not many entries since GLM is quite fast: on {{airlines_all.05p.csv}} (5M rows) I get 2 or 3 evaluations of the "ordinary" metrics on my MacBook, which does not seem very good. In AutoML we use the "AUTO" metalearner, which has the non_negativity constraint and doesn't support early stopping (https://h2oai.atlassian.net/browse/PUBDEV-4641):

{noformat}hex/glm/GLM.java:672: _parms._early_stopping = false; // PUBDEV-4641: early stopping does not work correctly with non-negative option{noformat}

@wendy Sure, I'd be happy to help fix some bugs, but it will probably still take a bit more staring into the code for me to be comfortable with it, as I still barely understand how the GLM is implemented. Even {{score_each_iteration}} doesn't score every iteration: when I don't specify {{lambda_search}}, {{alpha}}, and {{lambda_}} and set {{score_each_iteration=True}}, it works as expected, but when I specify both {{alpha}} and {{lambda_}} (see below), only about 12% of the {{training_rmse}} values are filled in. When I comment out {{lambda_}} I get around 17% of the {{training_rmse}} filled in, but when I keep {{lambda_}} and comment out {{alpha}}, all the {{training_rmse}} values are filled in. So I assume the problem is with the "alpha search".

{noformat}glm = H2OGeneralizedLinearEstimator(score_each_iteration=True,
                                    lambda_=[0, 0.5, 1],
                                    alpha=[0, 0.5, 1],
                                    solver="lbfgs",  # a solver is specified so training is iterative (instead of analytical)
                                    stopping_metric="rmse",
                                    stopping_rounds=3)
glm.train(y="ArrTime", training_frame=train){noformat}

Also, I noticed that {{lambda_search}} is not supported with {{stopping_metric}} and {{stopping_rounds}}; it has its own early stopping mechanism. I was thinking that a quick fix for the early stopping could be turning on {{score_each_iteration}} when early stopping is turned on, but as mentioned above it would probably not help the cases with a specified {{alpha}} parameter.
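
The fill percentages quoted above can be measured directly from the scoring history (a sketch, assuming {{scoring_history()}} returns a pandas DataFrame):

{noformat}hist = glm.scoring_history()
filled = hist["training_rmse"].notna().mean()
print("training_rmse filled in for %.0f%% of scored rows" % (100 * filled))
{noformat}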


Wendy commented: [~accountid:5e43370f5a495e0c91a74ebe] The default scoring_interval for GLM is 5. Hence, if you do not specify how often you want scoring to be done, that is the frequency at which entries appear in your scoring history.


Tomas Fryda commented: [~accountid:557058:1f01b471-f37b-40af-bae9-a18b38e24549] Are you sure about that? As far as I can tell, the default scoring interval is {{max(15s, 20*iteration_time)}} [1, 2]. Is there a way to set the scoring interval to a value other than the default? I only know about {{score_each_iteration}}. Also, as mentioned in a previous post, {{score_each_iteration}} doesn't score each iteration when both alpha and lambda are specified.

[1] https://github.com/h2oai/h2o-3/blob/3a8bd05e7ba8373b46a7825dd0a828e4255fc9d8/h2o-algos/src/main/java/hex/glm/GLM.java#L2150
[2] https://github.com/h2oai/h2o-3/blob/3a8bd05e7ba8373b46a7825dd0a828e4255fc9d8/h2o-algos/src/main/java/hex/glm/GLM.java#L2509


Wendy commented: [~accountid:5e43370f5a495e0c91a74ebe] I have added a new parameter for you, {{generate_scoring_history}}, which forces the algorithm to generate a scoring history when enabled, whether lambda_search is turned on or not. To specify a particular scoring iteration interval, set {{score_iteration_interval}} to whatever number you like. When you do not specify score_iteration_interval, the scoring interval will be max(15s, 20*iteration_time). However, when you do specify score_each_iteration or score_iteration_interval, the following condition is checked to decide whether scoring should occur:

!image-20210219-174147.png|width=983,height=271!

Check the scoring condition in the attachment.
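
From the Python side, the new knobs would be used roughly like this (a hedged sketch; {{generate_scoring_history}} and {{score_iteration_interval}} behave as described above, and the airlines frame is the one from the earlier snippet):

{noformat}glm = H2OGeneralizedLinearEstimator(generate_scoring_history=True,  # force a full scoring history
                                    score_iteration_interval=5,     # score every 5th iteration
                                    lambda_search=False)
glm.train(y="ArrTime", training_frame=train)
print(glm.scoring_history())
{noformat}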


Tomas Fryda commented: Thank you [~accountid:557058:1f01b471-f37b-40af-bae9-a18b38e24549]! I noticed that the scoring condition seems to be different for different variants, and in some cases it doesn't include {{score_iteration_interval}}. Maybe you know about this, but just in case you don't: [1], [2]. Are those two cases fixed by your {{generate_scoring_history}}?

[1] https://github.com/h2oai/h2o-3/blob/3a8bd05e7ba8373b46a7825dd0a828e4255fc9d8/h2o-algos/src/main/java/hex/glm/GLM.java#L2367
[2] https://github.com/h2oai/h2o-3/blob/3a8bd05e7ba8373b46a7825dd0a828e4255fc9d8/h2o-algos/src/main/java/hex/glm/GLM.java#L2528


Wendy commented: [~accountid:5e43370f5a495e0c91a74ebe] I absolutely did not see those two conditions. Thank you for bringing them to my attention. Yes, I intend to fix all your metrics problems with this JIRA.


Wendy commented: [~accountid:5e43370f5a495e0c91a74ebe] For multinomial with no lambda search and no cross-validation, this is the scoring history I have for you. Is it sufficient?

!image-20210224-184652.png|width=1297,height=810!


Wendy commented: I have added a performance benchmark to make sure that what I added did not affect GLM training time. Here is a summary of the performance comparison:

!image-20210313-000335.png|width=533,height=274!


h2o-ops commented May 14, 2023

JIRA Issue Migration Info

Jira Issue: PUBDEV-7968
Assignee: Wendy
Reporter: Erin LeDell
State: Resolved
Fix Version: 3.32.1.1
Attachments: Available (Count: 7)
Development PRs: Available

Linked PRs from JIRA

#5351

Attachments From Jira

Attachment Name: image-20210119-212038.png
Attached By: Wendy
File Link: https://h2o-3-jira-github-migration.s3.amazonaws.com/PUBDEV-7968/image-20210119-212038.png

Attachment Name: image-20210119-212110.png
Attached By: Wendy
File Link: https://h2o-3-jira-github-migration.s3.amazonaws.com/PUBDEV-7968/image-20210119-212110.png

Attachment Name: image-20210219-174147.png
Attached By: Wendy
File Link: https://h2o-3-jira-github-migration.s3.amazonaws.com/PUBDEV-7968/image-20210219-174147.png

Attachment Name: image-20210224-184652.png
Attached By: Wendy
File Link: https://h2o-3-jira-github-migration.s3.amazonaws.com/PUBDEV-7968/image-20210224-184652.png

Attachment Name: image-20210313-000335.png
Attached By: Wendy
File Link: https://h2o-3-jira-github-migration.s3.amazonaws.com/PUBDEV-7968/image-20210313-000335.png

Attachment Name: Screen Shot 2021-01-18 at 3.05.36 PM.png
Attached By: Erin LeDell
File Link: https://h2o-3-jira-github-migration.s3.amazonaws.com/PUBDEV-7968/Screen Shot 2021-01-18 at 3.05.36 PM.png

Attachment Name: Screen Shot 2021-01-18 at 3.26.15 PM.png
Attached By: Erin LeDell
File Link: https://h2o-3-jira-github-migration.s3.amazonaws.com/PUBDEV-7968/Screen Shot 2021-01-18 at 3.26.15 PM.png
