Commit: updated

revanth7667 committed Apr 23, 2024
1 parent 9b4d521 commit 6df47c4
Showing 2 changed files with 84 additions and 2 deletions.
36 changes: 36 additions & 0 deletions .github/workflows/cicd.yml
@@ -0,0 +1,36 @@
name: CI/CD

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Test
        run: make test
      - name: Format
        run: make format
      - name: Lint
        run: make lint
      - name: Archive and Upload Artifacts
        uses: actions/upload-artifact@v3
        with:
          name: ml_pipeline-artifacts
          path: ${{ github.workspace }}

      - name: Save to repository
        env:
          GH_TOKEN: ${{ secrets.GH_TOKEN }}
        run: |
          git config --local user.name 'github-actions'
          git config --local user.email "action@github.com"
          git add rust_micro/src/main.rs
          git commit -m "CICD performed" || echo "ignore commit failure, proceed"
          git push
50 changes: 48 additions & 2 deletions README.md
@@ -1,2 +1,48 @@
# Rust Serverless Transformer

## Overview

In this project we deploy a simple Rust function that uses an LLM from Hugging Face to continue the text given to it. The function is deployed as an AWS Lambda function.

## Prerequisites
Make sure you have the following installed:
- [Docker](https://docs.docker.com/get-docker/)
- [Rust](https://www.rust-lang.org/tools/install)
- [Cargo Lambda](https://www.cargo-lambda.info/guide/installation.html)
- [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html)

## App Development

1. Create a new Rust project using Cargo Lambda:
```bash
cargo lambda new <project-name>
```

2. Download the desired model from Hugging Face; in this case we used the [rustformers/pythia-ggml](https://huggingface.co/rustformers/pythia-ggml) model.
Don't forget to set the correct path to the model file in the code.

3. Build out the Rust code as required. In this case we take in a query from the user and complete it using the model; if no query is provided, a default query is used to start the completion.
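
The handler code itself is not part of this diff, but the default-query behaviour described above can be sketched in plain Rust (the function name `resolve_prompt` and the default text are assumptions, not the project's actual code):

```rust
// Sketch of the fallback-query logic: use the caller's query when one is
// supplied, otherwise fall back to a default prompt. Names are assumed.
const DEFAULT_QUERY: &str = "Once upon a time";

/// Returns the user's query, or the default when it is missing or blank.
fn resolve_prompt(query: Option<&str>) -> String {
    match query {
        Some(q) if !q.trim().is_empty() => q.to_string(),
        _ => DEFAULT_QUERY.to_string(),
    }
}

fn main() {
    // In the real handler, `query` would come from the request's query string.
    println!("{}", resolve_prompt(None));
    println!("{}", resolve_prompt(Some("The capital of France is")));
}
```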

4. Test the code locally with the following command:
```bash
cargo lambda watch
```

5. Tools like Postman can be used to send requests to the local function and inspect its responses.
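
The locally watched function can also be exercised from the command line. `cargo lambda watch` serves HTTP functions on port 9000 by default; the function name (`rust_micro`) and the `query` parameter below are assumptions about this project:

```shell
# Assumes `cargo lambda watch` is running in another terminal.
# Function name and query-parameter name are assumptions.
curl "http://localhost:9000/lambda-url/rust_micro/?query=Once+upon+a+time"
```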

6. Once the functionality is working as expected, create the production build.
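
For a Cargo Lambda project, the production build is typically produced with the command below; the `--arm64` flag is an assumption that the Lambda function targets Graviton, drop it for x86_64:

```shell
# Cross-compile an optimized binary for the Lambda environment.
# --arm64 targets Graviton; omit it for x86_64 functions.
cargo lambda build --release --arm64
```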

7. Use Docker to containerize the function along with the model file.
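
One possible Dockerfile for this step, offered only as a sketch since the actual Dockerfile is not part of this diff; the binary path follows `cargo lambda build` conventions, and the project name (`rust_micro`) and model filename are assumptions:

```dockerfile
# Sketch: package the compiled bootstrap binary and the GGML model file
# into an AWS-provided custom-runtime base image. Paths are assumptions.
FROM public.ecr.aws/lambda/provided:al2023

# cargo lambda build places the binary at target/lambda/<project-name>/bootstrap
COPY target/lambda/rust_micro/bootstrap ${LAMBDA_RUNTIME_DIR}/bootstrap

# Bundle the model; the handler must load it from this same path.
COPY models/pythia.bin /opt/model/pythia.bin

CMD ["bootstrap"]
```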

## App deployment

1. Upload the Docker image to AWS ECR.
2. Go to AWS Lambda and create a new function.
3. Select "Container image" as the source.
4. Provide the ECR image URI.
5. Set the memory and timeout as required (it is suggested to set them higher than the defaults, since the model takes time to load).
6. Create the function.
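
The steps above can also be performed from the AWS CLI. The account ID, region, repository name, and IAM role below are placeholders, not values from this project:

```shell
# Assumed account/region/names; replace with your own values.
AWS_ACCOUNT=123456789012
REGION=us-east-1
REPO=rust-transformer
ECR=$AWS_ACCOUNT.dkr.ecr.$REGION.amazonaws.com

# Step 1: push the image to ECR.
aws ecr create-repository --repository-name $REPO --region $REGION
aws ecr get-login-password --region $REGION \
  | docker login --username AWS --password-stdin $ECR
docker tag $REPO:latest $ECR/$REPO:latest
docker push $ECR/$REPO:latest

# Steps 2-6: create the function from the container image.
aws lambda create-function \
  --function-name $REPO \
  --package-type Image \
  --code ImageUri=$ECR/$REPO:latest \
  --role arn:aws:iam::$AWS_ACCOUNT:role/lambda-exec-role \
  --memory-size 3008 \
  --timeout 300 \
  --region $REGION
```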


## GitHub Actions
GitHub Actions is used to automatically run tasks such as testing, linting, and formatting whenever the repository is updated.
