llm-coder is a web application that leverages a Large Language Model (LLM) such as OpenAI's GPT-3.5 Turbo to generate code snippets based on user inputs.
- Docker
- OpenAI API Key
- Clone the repository:

  ```shell
  git clone https://github.com/nooraldeenkai/llm-coder.git
  ```

- Navigate to the project directory:

  ```shell
  cd llm-coder
  ```

- Create a `.env` file in the root directory and add your OpenAI API key:

  ```
  OPENAI_API_KEY=your_api_key_here
  ```

- Build the Docker image:

  ```shell
  docker build -t llm-coder .
  ```

- Run the Docker container:

  ```shell
  docker run -d -p 8003:8003 llm-coder
  ```

- Access the application at http://localhost:8003/static/index.html in your web browser.

- Alternatively, you can access the Swagger documentation at http://localhost:8003/docs for the API endpoints.
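If you run the application outside Docker, the server needs `OPENAI_API_KEY` available at startup. The sketch below shows one way to resolve the key from the environment with a fallback to the `.env` file; the hand-rolled parsing is an assumption, since the project may rely on `python-dotenv` or Docker's `--env-file` instead:

```python
import os


def load_api_key(env_file: str = ".env") -> str:
    """Return OPENAI_API_KEY from the environment, falling back to a .env file.

    Sketch only: the real project may load the key via python-dotenv
    or receive it from the Docker environment.
    """
    key = os.environ.get("OPENAI_API_KEY")
    if key:
        return key
    try:
        with open(env_file) as fh:
            for line in fh:
                line = line.strip()
                # Match the KEY=value line format used in the .env file above.
                if line.startswith("OPENAI_API_KEY="):
                    return line.split("=", 1)[1].strip()
    except FileNotFoundError:
        pass
    raise RuntimeError("OPENAI_API_KEY is not set")
```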
- `/get_answer`: POST endpoint to send a prompt to the language model and get a code snippet response.
- `/feedback`: POST endpoint to submit feedback on the generated code snippets.
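Both endpoints accept JSON over POST. The snippet below builds such a request with only the standard library; the `prompt` field name is an assumption, so check the Swagger page at `/docs` for the actual request schema before sending:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8003"


def build_request(path: str, payload: dict) -> urllib.request.Request:
    """Construct (but do not send) a JSON POST request to an API endpoint."""
    return urllib.request.Request(
        url=f"{BASE_URL}{path}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# The payload field name is assumed -- verify it against /docs.
req = build_request("/get_answer", {"prompt": "Write a Python hello world"})
# answer = json.load(urllib.request.urlopen(req))  # sends the request to a running container
```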
- You can customize the application by modifying the FastAPI endpoints and the logic in the `main.py` file to suit your specific use case.
- Adjust the Dockerfile and Docker container settings as needed for your deployment environment.
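As one example of the kind of customization meant here, you might wrap each user prompt in a template before it goes to the model. The helper below is hypothetical, not taken from `main.py`:

```python
def build_prompt(user_input: str, language: str = "python") -> str:
    """Wrap raw user input in a code-generation instruction template.

    Hypothetical helper: adapt to however main.py actually assembles
    the messages it sends to the OpenAI API.
    """
    return (
        f"You are a coding assistant. Reply with a single {language} "
        f"code snippet and no extra prose.\n\nTask: {user_input.strip()}"
    )
```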
Contributions are welcome! Please feel free to submit pull requests or open issues for any bug fixes, feature requests, or improvements.
This project is licensed under the MIT License.