sdg-classification-bert (sdgBERT App)

This repository powers a Streamlit app for classifying text with respect to the United Nations Sustainable Development Goals (SDGs). The classification model is a fine-tuned BERT named sdgBERT. The labelled data used to fine-tune sdgBERT was obtained from the OSDG Community Dataset, publicly available at https://zenodo.org/record/5550238#.Y93vry9ByF4. The OSDG dataset includes text from diverse fields; hence, the fine-tuned BERT model and the Streamlit app are generic and can be used to predict the SDG of most texts.

The Streamlit app supports SDG 1 to SDG 16, shown in the image below. Image source: https://www.un.org/development/desa/disabilities/about-us/sustainable-development-goals-sdgs-and-disability.html

App link and key functions

The app can be accessed from:

The app has the following key functions:

  • Single text prediction: copy/paste or type text into a text box.
  • Multiple text prediction: upload a CSV file. Note: the column containing the texts to be predicted must be titled "text_inputs". The app will generate an output CSV file that you can download. This downloadable file will include all the original columns in the uploaded CSV, a column for the predicted SDGs, and a column with prediction probability scores. If any text in "text_inputs" is longer than the model's maximum sequence length of approximately 300 to 400 words (i.e. 512 word pieces), it will be automatically truncated. For now, if you want to analyse large documents using this model or the Streamlit app, I recommend breaking the document into 300 to 400 word chunks, with each chunk in its own cell of the "text_inputs" column of your CSV file. You can then analyse a large document page by page, with the text of each page in its own CSV cell.
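The chunking workflow described above can be sketched in Python. This is only an illustration: the `chunk_words` helper, the document text, and the file name are hypothetical; the only requirement from the app is that the column be titled "text_inputs".

```python
import csv

def chunk_words(text, max_words=350):
    """Split a long document into chunks of roughly max_words words
    (hypothetical helper, sized to stay under the ~512 word-piece limit)."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

document = "..."  # your long document text goes here

# Write one chunk per row under the required "text_inputs" column
with open("chunks.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["text_inputs"])
    for chunk in chunk_words(document):
        writer.writerow([chunk])
```

The resulting chunks.csv can then be uploaded to the app for multiple text prediction.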

In future updates of the app, support for directly analysing PDF documents may be added to make analysing large documents easier.

Use the fine-tuned BERT model directly

If you would like to use the fine-tuned BERT model directly, you can easily do so using the code below:

from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("sadickam/sdgBERT")

model = AutoModelForSequenceClassification.from_pretrained("sadickam/sdgBERT")
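As a sketch of how the loaded tokenizer and model might be used for a single prediction (the example text is mine, not from this repository, and the label names come from whatever `id2label` mapping is stored in the model config):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("sadickam/sdgBERT")
model = AutoModelForSequenceClassification.from_pretrained("sadickam/sdgBERT")

text = "End poverty in all its forms everywhere."  # example input
inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities and take the highest-scoring class
probs = torch.softmax(logits, dim=-1)
pred = int(probs.argmax(dim=-1))
print(model.config.id2label[pred], float(probs[0, pred]))
```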

Or just clone the model repo from Hugging Face using the code below:

git lfs install
git clone https://huggingface.co/sadickam/sdg-classification-bert

# if you want to clone without large files – just their pointers –
# prepend your git clone with the GIT_LFS_SKIP_SMUDGE env var:
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/sadickam/sdg-classification-bert

OSDG online tool

The OSDG has an online tool for SDG classification of text. I encourage you to check it out at https://www.osdg.ai/ or visit their GitHub page at https://github.com/osdg-ai/osdg-data to learn more about their tool.

To do

  • Add model evaluation metrics
  • Add citation information
