[AiLab] Model Manager Dialog and autogenerated design elements and code #38664
Conversation
@madelynkasula I haven't addressed your comments from #38492 yet, so feel free to hold off on reviewing this or only look at the new Model Manager Dialog bits. Thank you!
import Button from '@cdo/apps/templates/Button';
import Applab from '@cdo/apps/applab/applab';

export default class ModelManagerDialog extends React.Component {
This dialog will live next to, and be most similar to, these two components, so just linking them here in case it's helpful:
<LibraryManagerDialog/>
<AssetManager/>
The LibraryManagerDialog is newer and more consistent with our newer styles, so I'd go with that one if you're trying to decide how to style something (related to our convo here).
<Button
  text={'Import'}
  color={Button.ButtonColor.orange}
  onClick={this.importMLModel}
Since this function does most of its work asynchronously, should we show a spinner or something similar while it does its work?
Screen.Recording.2021-02-01.at.9.29.13.PM.mov
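One way to drive a spinner around the async import, sketched without React for clarity (the `ImportController` name and `importFn` parameter are illustrative, not from this PR — in the component the flag would live in state and toggle the spinner render):

```javascript
// Minimal sketch: track an "importing" flag around the async import so the
// UI can show a spinner while the request is in flight. Names are assumptions.
class ImportController {
  constructor(importFn) {
    this.importFn = importFn; // async function that performs the import
    this.isImporting = false; // render a spinner while this is true
  }

  async importModel(modelId) {
    this.isImporting = true; // show spinner
    try {
      return await this.importFn(modelId);
    } finally {
      this.isImporting = false; // hide spinner, even if the import fails
    }
  }
}
```

In the React component the equivalent would be `this.setState({isImporting: true})` before the request and a `finally` (or `.always`) that clears it.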
@@ -19,6 +19,7 @@ import {
} from '../lib/util/javascriptMode';
import {commands as audioCommands} from '@cdo/apps/lib/util/audioApi';
import {commands as timeoutCommands} from '@cdo/apps/lib/util/timeoutApi';
import {commands as mlCommands} from '@cdo/apps/lib/util/mlApi';
I see a lot of "ml" related naming, but it sounds like we are going to trend towards "ai" instead?
Yeah - it'd be good to get all on the same page about that and update the code to match. Ok to do as a rename/refactor after this PR?
Sounds fine.
};

componentDidUpdate() {
  this.getModelList();
When running a regular App Lab app, we keep hitting /api/v1/ml_models/names. This component seems to be updating more often than expected.
`componentDidMount()` instead of `componentDidUpdate()` 🤦‍♀️
That seems to fire once, but due to the lifetime management of these dialogs (they live forever, just showing and hiding), it doesn't refresh the model list each time the dialog is opened. I have implemented what seems to be a solution in #39041.
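One way to square this circle is to refetch only on the closed-to-open transition, so a long-lived dialog refreshes each time it opens without hammering the endpoint on every re-render. This is only a sketch of the idea (the `isOpen` prop name is an assumption; the actual fix landed in #39041):

```javascript
// Sketch: decide whether to refetch the model list, given previous and
// current props. Returns true exactly once per open of the dialog.
function shouldRefetchModels(prevProps, props) {
  // Refetch only when the dialog transitions from closed to open,
  // not on every update while it sits open (or closed).
  return props.isOpen && !prevProps.isOpen;
}

// Inside the component this would be used as:
//   componentDidUpdate(prevProps) {
//     if (shouldRefetchModels(prevProps, this.props)) {
//       this.getModelList();
//     }
//   }
```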
apps/src/applab/ai.js
Outdated
var predictOnClick = `onEvent("${predictButtonId}", "click", function() {
  getPrediction("${modelData.name}", "${modelId}", testValues, function(value) {
    setText("${predictionId}", value);
  });
});`;
Do we need to expose errors in prediction to the user writing an app? (Both so that they can debug, and also so that they can give information back to the user running their app.)
That seems like a good approach for now.
    return resolve();
  })
  .fail((jqXhr, status) => {
    return reject({message: 'An error occurred'});
Yeah, it does seem we might want to expose this error to the user's program, so they can then report it back to their end-user.
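A sketch of what surfacing the server's message (instead of the generic `'An error occurred'`) might look like. The `responseJSON.error` field and function name are assumptions about the API's error shape, not something confirmed in this PR:

```javascript
// Hypothetical sketch: build the rejection object passed back to the
// user's program from a failed jQuery XHR, preferring the server's
// error message when one is available. Field names are assumptions.
function rejectionFromXhr(jqXhr) {
  const serverMessage =
    jqXhr && jqXhr.responseJSON && jqXhr.responseJSON.error;
  return {
    status: jqXhr ? jqXhr.status : 0,
    message: serverMessage || 'An error occurred while getting a prediction',
  };
}
```

The `.fail` handler above could then call `reject(rejectionFromXhr(jqXhr))`, giving the student's app something actionable to report to its end-user.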
One thing we did observe: currently a remixed project won't be able to access another user's saved trained model. It sounds like we want to relax that restriction, but we need to ensure this approach is appropriate.
… not having a current_user when fetching model names
I wonder if we should switch to the Design view immediately after an import, so that the user can see the new design elements that were just added? From using this a bit today, I saw the new code right away, but I was wondering where the new elements were, before realising they wouldn't show in the main view until the program was run.
Obviously more things to do later, but since this is hidden behind an experiment, seems good to move forward.
Yeah, that makes sense. I'll address it in a follow-up.
Pivot from the now-closed #38492.
Users will be able to see all of their trained models in the new "Model Manager Dialog", which can be accessed from the settings cog of the App Lab toolbox. From there, they can click to import a model, which generates form-input design elements to collect test values, along with related starter code to get a prediction back from the ML model.