Responsible Prompting is an LLM-agnostic tool that dynamically supports users in crafting prompts that reflect responsible intentions and help avoid undesired or negative outputs.
This API is composed of a Flask server that hosts the `/recommend` route, the Swagger files, and a responsible prompting demo.
You can run the server locally to execute requests and obtain responsible prompting recommendations as described in the Swagger documentation.
This short tutorial assumes that you have:
- A machine with Python 3.9 installed
- A Hugging Face access token: https://huggingface.co/docs/hub/en/security-tokens
- In your terminal, clone this repository and `cd` into the `responsible-prompting-api` folder
- Create a virtual environment with `python -m venv <name-of-your-venv>`
- Activate your virtual environment with `source <name-of-your-venv>/bin/activate`
- Execute `pip install -r requirements.txt` to install project requirements
- Generate a Hugging Face access token: https://huggingface.co/docs/hub/en/security-tokens
- In the `.env` file, replace `<include-token-here>` with your Hugging Face access token: `HF_TOKEN=<include-token-here>`
- Execute `python app.py`
- Check if the message `* Serving Flask app 'app'` appears and you are good to go!
- In your browser, access http://127.0.0.1:8080/ and you will see the message 'Ready!'
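For reference, an entry like `HF_TOKEN=<include-token-here>` in `.env` is a plain `KEY=VALUE` line. The project loads credentials through `helpers/get_credentials.py`; the sketch below is only an illustrative standard-library parser for such a file, not the project's actual loader:

```python
from pathlib import Path

def read_env(path=".env"):
    """Parse simple KEY=VALUE lines from a dotenv-style file into a dict."""
    env = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blank lines, comments, and malformed entries
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

# token = read_env().get("HF_TOKEN")
```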
- If you have not already, generate a Hugging Face access token: https://huggingface.co/docs/hub/en/security-tokens
- In line 140 of `index.html`, replace `<include-token-here>` with your Hugging Face access token: `headers: {"Authorization": "Bearer <include-token-here>"}`
- Run the server (if it is not already running)
- In your browser, access: http://127.0.0.1:8080/static/demo/index.html
Note
In case you wish to make requests to other APIs, you can change the `$.ajax` call in line 138 of `index.html`. Remember to also make sure that the JSON data (in line 142) follows the specifications of the LLM being used.
- Run the server (if it is not already running)
- In your browser, access: http://127.0.0.1:8080/swagger
In Swagger, you can test the API and understand how to make requests.
- Run the server (if it is not already running)
- In your browser, access http://127.0.0.1:8080/recommend and pass your parameters. Here are request examples with the prompt:
`Act as a data scientist with 8 years of experience. Provide suggestions of what to do to make the data science project more inclusive.`
Just copy and paste this in your terminal (make sure you have curl installed):
```shell
curl -X 'GET' \
  'http://127.0.0.1:8080/recommend?prompt=Act%20as%20a%20data%20scientist%20with%208%20years%20of%20experience.%20Provide%20suggestions%20of%20what%20to%20do%20to%20make%20the%20data%20science%20project%20more%20inclusive.' \
  -H 'accept: */*' \
  -H 'add_lower_threshold: 0.3' \
  -H 'add_upper_threshold: 0.5' \
  -H 'remove_lower_threshold: 0.3' \
  -H 'remove_upper_threshold: 0.5' \
  -H 'model_id: sentence-transformers/all-minilm-l6-v2'
```
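The same request can also be sketched in Python with only the standard library. This mirrors the curl call above — the prompt travels as a URL-encoded query parameter, the thresholds and model id as headers — and assumes the server from the previous steps is running locally on port 8080:

```python
import urllib.parse
import urllib.request

BASE_URL = "http://127.0.0.1:8080/recommend"
prompt = ("Act as a data scientist with 8 years of experience. "
          "Provide suggestions of what to do to make the data science "
          "project more inclusive.")

# URL-encode the prompt (spaces become %20, as in the curl example).
url = BASE_URL + "?prompt=" + urllib.parse.quote(prompt)

req = urllib.request.Request(url, headers={
    "accept": "*/*",
    "add_lower_threshold": "0.3",
    "add_upper_threshold": "0.5",
    "remove_lower_threshold": "0.3",
    "remove_upper_threshold": "0.5",
    "model_id": "sentence-transformers/all-minilm-l6-v2",
})

# With the server running locally, send the request:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode("utf-8"))
```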
Just copy and paste this in your browser (spaces will be encoded for you):
```
http://127.0.0.1:8080/recommend?prompt=Act as a data scientist with 8 years of experience. Provide suggestions of what to do to make the data science project more inclusive.
```
The response should look like this:
```json
{
    "add": [
        {
            "prompt": "What participatory methods might I use to gain a deeper understanding of the context and nuances of the data they are working with?",
            "similarity": 0.49432045646542666,
            "value": "participation"
        },
        {
            "prompt": "Be inclusive of individuals with non-traditional backgrounds and experiences in your response.",
            "similarity": 0.48868733465585423,
            "value": "inclusion and diversity"
        },
        {
            "prompt": "Can you suggest some techniques to handle missing data in this dataset?",
            "similarity": 0.47995963514385853,
            "value": "progress"
        },
        {
            "prompt": "How do I make this dataset compatible with our analysis tools?",
            "similarity": 0.47405629104549163,
            "value": "transformation"
        },
        {
            "prompt": "Consider the potential impact of the data, question, or instruction on individuals and society as a whole.",
            "similarity": 0.4739456017558868,
            "value": "awareness"
        }
    ],
    "remove": []
}
```

This is the current structure of the repository files:
```
.
├── CHANGELOG.md
├── CODE_OF_CONDUCT.md
├── CONTRIBUTING.md
├── Dockerfile
├── LICENSE
├── MAINTAINERS.md
├── README.md
├── SECURITY.md
├── app.py
├── config.py (swagger config file)
├── control
│   └── recommendation_handler.py
├── helpers
│   └── get_credentials.py
├── prompt-sentences-main
│   ├── README.md
│   ├── prompt_sentences-all-minilm-l6-v2.json
│   ├── prompt_sentences-bge-large-en-v1.5.json
│   ├── prompt_sentences-multilingual-e5-large.json
│   ├── prompt_sentences-slate-125m-english-rtrvr.json
│   ├── prompt_sentences-slate-30m-english-rtrvr.json
│   ├── prompt_sentences.json
│   ├── sentences_by_values-all-minilm-l6-v2.png
│   ├── sentences_by_values-bge-large-en-v1.5.png
│   ├── sentences_by_values-multilingual-e5-large.png
│   ├── sentences_by_values-slate-125m-english-rtrvr.png
│   └── sentences_by_values-slate-30m-english-rtrvr.png
├── requirements.txt
└── static
    ├── demo
    │   ├── index.html
    │   └── js
    │       └── jquery-3.7.1.min.js
    └── swagger.json
```
If you have any questions or issues, you can create a new issue here.
Pull requests are very welcome! Make sure your patches are well tested. Ideally create a topic branch for every separate change you make. For example:
- Fork the repo
- Create your feature branch (`git checkout -b my-new-feature`)
- Commit your changes (`git commit -am 'Added some feature'`)
- Push to the branch (`git push origin my-new-feature`)
- Create a new Pull Request
If you would like to see the detailed LICENSE, click here.
- Author: Vagner Santana vsantana@ibm.com
- Author: Melina Alberio
- Author: Cássia Sanctos csamp@ibm.com
- Author: Tiago Machado Tiago.Machado@ibm.com