---
title: "ViT: Image Classifier"
colorFrom: indigo
colorTo: indigo
sdk: gradio
app_port: 7860
emoji: 🔥
pinned: false
license: mit
app_file: app.py
---
This workshop was developed for the "Intro to Artificial Intelligence" course at UiT: The Arctic University of Norway in collaboration with Sopra Steria.

In this workshop, you will get hands-on experience with:
- Cloning and pushing code from/to GitHub.
- Loading and running a pretrained image classification model from Transformers.
- Developing a simple web application that lets users test a pretrained model using Gradio.
- Making a public web app anyone can access using Hugging Face Spaces.
- Automating tasks using GitHub Actions.
- Applying Explainable AI (XAI) to Vision Transformers using transformers-interpret.

And of course, all of this completely for free!
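The "loading and running a pretrained model" part can be sketched in a few lines. Everything below is an illustrative assumption rather than the workshop's actual code: the helper names are made up, and `google/vit-base-patch16-224` is simply a commonly used public ViT checkpoint, not necessarily the one the workshop uses.

```python
def top_k(predictions, k=3):
    """Return the k highest-scoring predictions, best first.

    `predictions` is a list of {"label": str, "score": float} dicts,
    the format returned by a transformers image-classification pipeline.
    """
    return sorted(predictions, key=lambda p: p["score"], reverse=True)[:k]


def classify(image, model_name="google/vit-base-patch16-224"):
    """Run a pretrained ViT classifier on an image (file path, URL, or PIL image)."""
    # Imported here so top_k() stays usable without the heavy dependencies.
    from transformers import pipeline  # requires transformers + torch installed
    classifier = pipeline("image-classification", model=model_name)
    return classifier(image)

# Example (downloads the model weights on first run):
#   for p in top_k(classify("cat.jpg")):
#       print(p["label"], round(p["score"], 3))
```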
Some truly amazing saliency maps were submitted by the students. The submitted images are made available in Releases.
- André Pedersen, Apps, Sopra Steria
- Tor-Arne Schmidt Nordmo, IFI, UiT: The Arctic University of Norway
1. Make your first GitHub account by going to [github.com](https://github.com) and signing up (see top right of the website).
2. After logging in, make a copy of the repository by making a fork (click the `Fork` button, choose your user as `Owner`, and click `Create fork`).
3. Now you are ready to clone your own fork to your laptop. Open a terminal and run (remember to replace `<username>` with your own GitHub user name):
   ```
   git clone https://github.com/<username>/INF1600-ai-workshop.git
   ```
   Move into the new directory:
   ```
   cd INF1600-ai-workshop
   ```
4. After cloning, go inside the repository and run these lines in the terminal to create a virtual environment and activate it:
   ```
   python3 -m venv venv/
   source venv/bin/activate
   ```
   On Windows, activate the virtual environment by running `./venv/Scripts/activate` instead of the `source` command.
5. Install the dependencies into the virtual environment:
   ```
   pip install -r requirements.txt
   ```
6. To test that everything is working, launch the web server with:
   ```
   python app.py
   ```
7. You can then access the web app by going to http://127.0.0.1:7860 in your favourite web browser.
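For reference, the heart of such an app is a Gradio `Interface` wrapping a prediction function. The sketch below is an assumption about what `app.py` roughly looks like, not the repository's actual code; the helper names `build_interface` and `scores_to_label_dict` are hypothetical.

```python
def build_interface(predict_fn, title="ViT: Image Classifier"):
    """Wrap a prediction function in a simple Gradio image-classification UI."""
    import gradio as gr  # imported lazily; requires `pip install gradio`
    return gr.Interface(
        fn=predict_fn,                        # called on each Submit click
        inputs=gr.Image(type="pil"),          # user uploads or picks an example image
        outputs=gr.Label(num_top_classes=5),  # shows the top-5 labels with scores
        title=title,
    )


def scores_to_label_dict(predictions):
    """Convert pipeline output to the {label: score} dict that gr.Label expects."""
    return {p["label"]: float(p["score"]) for p in predictions}

# Usage (in app.py):
#   interface = build_interface(predict)
#   interface.launch()  # serves locally on http://127.0.0.1:7860 by default
```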
8. In the web app, try clicking one of the example images and then clicking the orange `Submit` button. The model's results should show on the right after a few seconds.
9. Try accessing the same address from your mobile phone.
10. This should not work: to access the app from a different device, you need to serve it. Try setting `share=True` in the `interface.launch()` call in the `app.py` script. When running `app.py` now, you should be given a different web address. Try using that one on your mobile device instead.

But of course, hosting the app yourself from your laptop is not ideal. What if there was some alternative way to do this without using your own device, completely for free...
11. Go to the Hugging Face sign-up page at [huggingface.co](https://huggingface.co) and make an account.
12. After making an account and logging in, click the `+ New` button on the left of the website and choose `Space` from the dropdown.
13. In the `Create a new Space` tab, choose a `Space name` for the app, choose a License (preferably `MIT`), among the `Space SDKs` choose `Gradio`, and finally click `Create Space`.

We are now given the option to add the relevant files manually, but that is boring... Let's instead set up a robot that does it for us!
14. On the Hugging Face website, click on your user badge (top right), and from the dropdown click `Settings`. On the left-hand side of the `Settings` page, click `Access Tokens`, and then click `New Token`. Set the name `HF_TOKEN`, set permissions to `write`, and click `Generate a token`.
15. Then you need to make the same token available in your GitHub fork. Go to your fork's `Settings > Secrets and variables > Actions` and click the green `New repository secret` button. Set `HF_TOKEN` as the name, and as the value paste the token you created previously on Hugging Face (find it under `Hugging Face > Settings > Access Tokens > Select token > Click show`).
16. On your laptop, open the file located at `.github/workflows/deploy.yml`, and on the last line, replace the `andreped` and `andreped/ViT-ImageClassifier` phrases with your own Hugging Face user and Space name.
17. Set up communication between GitHub and Hugging Face by running the following in the terminal (replace `andreped/ViT-ImageClassifier` like in step 16):
    ```
    git remote add space https://huggingface.co/spaces/andreped/ViT-ImageClassifier
    ```
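For orientation, the sync step at the end of such a `deploy.yml` typically looks roughly like the fragment below. This is a sketch following Hugging Face's documented Spaces sync recipe, not necessarily the exact workflow in this repository; the step name is an assumption.

```yaml
# Hypothetical final step of .github/workflows/deploy.yml:
# push the repository to the Space, authenticating with the HF_TOKEN secret.
- name: Push to Hugging Face hub
  env:
    HF_TOKEN: ${{ secrets.HF_TOKEN }}
  run: git push --force https://andreped:$HF_TOKEN@huggingface.co/spaces/andreped/ViT-ImageClassifier main
```

This is the part where `andreped` and `andreped/ViT-ImageClassifier` must be replaced with your own user and Space name.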
18. Then push the code to Hugging Face to enable synchronization (only needed once):
    ```
    git push --force space main
    ```
    The first time, you will be prompted for your username and password. As the password, you need to give the `HF_TOKEN` you defined earlier. Find it under `Settings > Access Tokens > Select token > Click show`.
19. Then push the code to GitHub:
    ```
    git add .
    git commit -m "Some changes"
    git push
    ```
20. Now go to your GitHub fork (e.g., `https://github.com/<username>/INF1600-ai-workshop/`) and verify that the code is there.
21. Then click the `Actions` tab to see running workflows. Verify that the workflow ran successfully by clicking the current run and checking its status.
22. Finally, head over to your Hugging Face Space and check that everything is working. My own app is hosted at https://huggingface.co/spaces/andreped/ViT-ImageClassifier.
We have extended this app to enable interpretation of the AI model's predictions. This technique is called Explainable AI (XAI). If you want, you can try to reproduce the steps above with this other repo.
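Whatever explainer produces the attributions (here, transformers-interpret), they are usually turned into a saliency map by scaling the per-pixel or per-patch attribution scores into [0, 1] before overlaying them on the image. A minimal, library-free sketch of that normalization step (the function name is hypothetical):

```python
def normalize_saliency(attributions):
    """Scale a 2-D grid of attribution scores into [0, 1] for display.

    `attributions` is a list of rows of floats, e.g. per-patch relevance
    scores from an explainer; a constant map normalizes to all zeros.
    """
    flat = [v for row in attributions for v in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:  # avoid division by zero on a constant map
        return [[0.0 for _ in row] for row in attributions]
    return [[(v - lo) / (hi - lo) for v in row] for row in attributions]

# Example: a 2x2 attribution grid
# normalize_saliency([[1.0, 3.0], [2.0, 5.0]])
# -> [[0.0, 0.5], [0.25, 1.0]]
```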
You can click the badge to access the deployed app on Hugging Face:
The code in this repository is released under MIT license.