
Add host commandline argument for UI and sklearn server #27

Merged: 3 commits into mlflow:master on Jun 16, 2018

Conversation

@mdagost (Contributor) commented on Jun 8, 2018

The UI and the model server listen on 127.0.0.1 by default, but this doesn't work from within a docker container. This PR introduces a --host commandline flag so that you can pass in 0.0.0.0 to listen on all interfaces.

To verify that this works, you can do the following:

git clone https://github.com/databricks/mlflow.git
cd mlflow
docker build -t mlflow .
docker run -it -p 5000:5000 mlflow

Then from within the container:

mlflow ui

And go to http://127.0.0.1:5000 and notice that you don't see the UI. Then from within the container instead do:

mlflow ui --host=0.0.0.0

and visit http://0.0.0.0:5000

To see the model serving stuff, go into the container and do

python example/quickstart/test_sklearn.py

and note the model id. Then do

mlflow sklearn serve -r <MODEL_ID> model

and from a separate terminal try (but fail) to get a prediction:

curl -d '[{"x": 1}, {"x": -1}]' -H 'Content-Type: application/json'  -X POST localhost:5000/invocations

Instead try

 mlflow sklearn serve -r 03947ea706cd474aa4d2395db7d9d6ff --host=0.0.0.0 model
curl -d '[{"x": 1}, {"x": -1}]' -H 'Content-Type: application/json'  -X POST 0.0.0.0:5000/invocations
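The verification steps above hinge on the difference between binding 127.0.0.1 and 0.0.0.0. A minimal sketch (plain Python sockets, not mlflow code) of what the --host value controls:

```python
import socket

# Illustration of what the new --host flag controls: the address a TCP server
# socket binds to. 127.0.0.1 accepts only loopback connections, which is why a
# server inside a container is unreachable from outside it; 0.0.0.0 binds all
# interfaces, including the one docker's published port forwards to.
def bound_address(host):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, 0))          # port 0: let the OS pick a free port
    srv.listen(1)
    addr = srv.getsockname()[0]  # the address the socket actually bound
    srv.close()
    return addr

print(bound_address("127.0.0.1"))  # 127.0.0.1: loopback connections only
print(bound_address("0.0.0.0"))    # 0.0.0.0: listen on all interfaces
```

This is exactly why the curl against localhost:5000 fails for a loopback-bound server inside the container but succeeds once the server binds 0.0.0.0.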

@aarondav (Contributor)

Thanks for the contribution! This looks great. Would you mind adding a PR description to give some context about why this change is made, including the steps you took to verify that this works as expected?

mlflow/cli.py Outdated
@@ -124,12 +124,14 @@ def run(uri, entry_point, version, param_list, experiment_id, mode, cluster_spec
@click.option("--file-store-path", default=None,
help="The root of the backing file store for experiment and run data. Defaults to %s."
% file_store._default_root_dir())
def ui(file_store_path):
@click.option("--host", default="127.0.0.1",
help="The networking interface on which the UI server listens. Defaults to 127.0.0.1. Use 0.0.0.0 for docker.")
Review comment (Contributor):
Looks like this line (and the associated one) has a linter error due to line length. In general you can run bash lint.sh locally to run the linter.

Maybe we could change the help a bit while we're at it (mostly to clarify the implication of the default and of setting 0.0.0.0):

@click.option("--host", default="127.0.0.1",
              help="The IP address which the UI server listens."
                   " Defaults to 127.0.0.1, allowing only local connections."
                   " Use 0.0.0.0 to bind to all addresses.")

@@ -76,7 +76,9 @@ def commands():
@click.argument("model_path")
@click.option("--run_id", "-r", metavar="RUN_ID", help="Run ID to look for the model in.")
@click.option("--port", "-p", default=5000, help="Server port. [default: 5000]")
def serve_model(model_path, run_id=None, port=None):
@click.option("--host", default="127.0.0.1",
help="The networking interface on which the UI server listens. Defaults to 127.0.0.1. Use 0.0.0.0 for docker.")
Review comment (Contributor):

Maybe here instead of "UI server" we can say "prediction server".
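For context, the pattern discussed in these review comments is a standard click option threaded through to the server call. A hypothetical minimal sketch (names and behavior are illustrative, not mlflow's actual implementation):

```python
import click

# Hypothetical sketch of the pattern discussed above; serve() and its
# body are illustrative, not mlflow's actual code.
@click.command()
@click.option("--host", default="127.0.0.1",
              help="The IP address on which the prediction server listens."
                   " Defaults to 127.0.0.1, allowing only local connections."
                   " Use 0.0.0.0 to bind to all addresses.")
@click.option("--port", "-p", default=5000, help="Server port. [default: 5000]")
def serve(host, port):
    # A real command would hand host/port to the underlying server, e.g.
    # app.run(host=host, port=port); here we just echo the binding.
    click.echo("listening on %s:%d" % (host, port))
```

Keeping the default at 127.0.0.1 preserves the safe local-only behavior, while --host=0.0.0.0 opts in to listening on all interfaces.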

@mdagost (Contributor, Author) commented on Jun 13, 2018

@aarondav I added a PR description and addressed your help text (and linting) comments.

@codecov-io

Codecov Report

Merging #27 into master will increase coverage by 0.1%.
The diff coverage is 40%.


@@            Coverage Diff            @@
##           master      #27     +/-   ##
=========================================
+ Coverage    69.6%   69.71%   +0.1%     
=========================================
  Files          44       44             
  Lines        2527     2536      +9     
=========================================
+ Hits         1759     1768      +9     
  Misses        768      768
Impacted Files Coverage Δ
mlflow/cli.py 0% <0%> (ø) ⬆️
mlflow/sklearn.py 78.68% <66.66%> (+0.35%) ⬆️
mlflow/store/file_store.py 90.95% <0%> (+0.08%) ⬆️
mlflow/projects.py 54.22% <0%> (+0.2%) ⬆️
mlflow/tracking/__init__.py 82.22% <0%> (+0.4%) ⬆️
mlflow/version.py 100% <0%> (+100%) ⬆️

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 13bbd76...138b55e.

@aarondav (Contributor)

LGTM - merging!

@aarondav aarondav merged commit 1bb2a15 into mlflow:master Jun 16, 2018
javierluraschi added a commit to rstudio/mlflow that referenced this pull request Aug 25, 2018
Support for running R and Python travis tests
juntai-zheng pushed a commit that referenced this pull request Dec 19, 2019
* news post on website

* note formatting

* cran pkg is up
jdlesage pushed a commit to jdlesage/mlflow that referenced this pull request Dec 23, 2019