
How to deploy tldrstory to serverless environment? #5

Closed
FriendlyUser opened this issue Dec 16, 2020 · 10 comments
@FriendlyUser

Hi, I was wondering if there was any recommended way to deploy to a free tier cloud environment such as heroku or google cloud platform.

I tried my best, but the combination of running the api backend and trying to dockerize tldrstory has me stumped.

I'm guessing the recommended way is to have a virtual machine with a GPU.

@davidmezzetti
Member

Thank you for giving tldrstory a try, sorry to hear you're having issues.

You'll need at least 8 GB of RAM to run the full stack. You shouldn't need a GPU though it would make some things faster.

If you share the issues you're running into, I may be able to help.

@FriendlyUser
Author

I am having trouble running the api backend in docker.

My original plan was to use share.streamlit for the tldrstory front-end and Google Cloud Run for the FastAPI backend, but I was getting an error that said the config wasn't available.

[screenshot of the error message]

Could be related to my docker image.

```dockerfile
FROM python:3.6.12

WORKDIR /usr/src/app

# Copy the source from the current directory to the working directory inside the container
COPY . .

RUN python -m pip install -r requirements.txt
RUN python -m pip install tldrstory
# RUN python -m pip install git+https://github.com/neuml/tldrstory

# Note: "|| true" swallows any failure here, so the image builds even if indexing errors out
RUN python -m tldrstory.index index.yml || true
RUN echo "hello world"

# Command to run the FastAPI backend
# CMD ["/bin/sh", "run.sh"]
```

I also moved the files to the root and updated the paths to reference app.yml instead of sports/app.yml.

@davidmezzetti
Member

I think I know what's going on. I don't see it in this Docker image, but wherever you're starting FastAPI, change the environment variable from INDEX_SETTINGS to CONFIG.

The documentation on GitHub is out of date due to a recent txtai change.
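As a concrete sketch of the rename (the uvicorn entry point and file name below are assumptions based on txtai's API conventions, not commands taken from this thread):

```shell
# Before (txtai releases prior to 1.5.0):
# INDEX_SETTINGS=index.yml uvicorn "txtai.api:app"

# After: the environment variable is now named CONFIG
CONFIG=index.yml uvicorn "txtai.api:app"
```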

@davidmezzetti
Member

The documentation has now been updated. You'll want a separate yml file for the api with just the path to the index. The updated docs show this.
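For illustration, such an API yml might contain nothing but the index path (the file name and path value below are placeholders, not taken from this thread):

```yaml
# api.yml - read only by the API process
# path points at the directory produced by the indexing step
path: /usr/src/app/index
```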

@davidmezzetti
Member

Closing this issue due to inactivity. Please re-open or open a new issue if problems continue to persist.

@csheargm

csheargm commented Mar 12, 2021 via email

@davidmezzetti
Member

INDEX_SETTINGS is from txtai and that was removed in txtai 1.5.0.

I would go to your python site-packages directory and view the txtai/api.py file to see.
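If you're not sure where site-packages lives, Python can report where a package is installed. This sketch uses the stdlib json package so it runs anywhere; substitute "txtai" on the machine in question:

```python
# Print the filesystem location of an installed package so you can open its source.
import importlib.util

spec = importlib.util.find_spec("json")  # replace "json" with "txtai"
print(spec.origin)  # e.g. /usr/lib/python3.x/json/__init__.py
```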

A more brute-force approach would be to run a grep across your file system to see where it's coming from:

```shell
find / -type f | grep py$ | xargs grep INDEX_SETTINGS
```

The 2.0 version should look like this:

https://github.com/neuml/txtai/blob/v2.0.0/src/python/txtai/api.py#L345

@csheargm

csheargm commented Mar 12, 2021 via email

@davidmezzetti
Member

Glad you got it working!

The first issue now makes sense looking at your prior message:

> I suspect I have an older version of tldrstory which expects the "INDEX_SETTINGS" environment variable. So I did a sudo pip3 install tldrstory -U

Notice how the install was run with sudo, which installs the package system-wide. The previous install must have been done as a user, so I wouldn't take any action on this; it's just a difference in how the package was installed.

I'll create an issue to update the README for items 2 and 3. Thank you for the feedback!

@csheargm

csheargm commented Mar 12, 2021 via email
