Backend for a paraphrasing tool that leverages LLM APIs. It runs as an AWS Lambda application and uses the Serverless Framework for deployment.
POST /paraphrase
- available providers: "chatgpt", "gemini"
- available tones: "formal", "amicable", "fun", "casual", "sympathetic", "persuasive"
- sample request:
{ "provider": "chatgpt", "tone": "formal", "text": "I'm hungry. What's for dinner?" }
- sample response:
{ "result": "I am currently experiencing hunger. May I inquire about the menu for this evening's meal?" }
GET /providers
- sample response:
{ "providers": [ "chatgpt", "gemini" ] }
GET /tones
- sample response:
{ "tones": [ "formal", "amicable", "fun", "casual", "sympathetic", "persuasive" ] }
$ make .env
- see the generated .env file for configuration
$ make deps
- installs dependencies
$ make test
- runs unit tests
$ make testIncludeInt
- runs tests, including integration tests
$ make build
- generates the bin directory used in deployment
$ make deploy
- deploys the application via the Serverless Framework
$ make fmt
- formats the source code
$ make mocks
- generates test mocks (to be used with stretchr/testify) for all interfaces in the project
- can be configured in .mockery.yaml
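A minimal .mockery.yaml sketch, following mockery v2 configuration conventions; the module path is a placeholder, not the project's actual path:

```yaml
with-expecter: true
dir: mocks
packages:
  github.com/your-org/paraphrase-backend:  # placeholder module path
    config:
      all: true  # generate mocks for all interfaces in the package
```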