
---
title: 'ViT: Image Classifier'
colorFrom: indigo
colorTo: indigo
sdk: gradio
app_port: 7860
emoji: 🔥
pinned: false
license: mit
app_file: app.py
---

# INF-1600 AI Deployment workshop


This workshop was developed for the "Intro to Artificial Intelligence" course at UiT: The Arctic University of Norway in collaboration with Sopra Steria.

In this workshop, you will get hands-on experience with running a Gradio AI web app locally, version control with Git and GitHub, and continuous deployment to Hugging Face Spaces through GitHub Actions.

And of course, all of this completely FOR FREE!

Some truly amazing saliency maps were submitted by the students. The submitted images are made available in Releases.

## Workshop Organizers

## Demo

*(Screenshot of the demo web app)*

## Getting Started

1. Make your first GitHub account by going to https://github.com and signing up (see top right of the website).

2. After logging in, make a copy of the repository by making a fork (click the *Fork* button, choose your user as owner, and click *Create fork*).

3. Now you are ready to clone your own fork to your laptop by opening a terminal and running (remember to replace `<username>` with your own GitHub user name):

   ```
   git clone https://github.com/<username>/INF1600-ai-workshop.git
   ```

   Move into the new directory:

   ```
   cd INF1600-ai-workshop
   ```

4. After cloning, from inside the repository, run these lines in the terminal to create a virtual environment and activate it:

   ```
   python3 -m venv venv/
   source venv/bin/activate
   ```

   On Windows, activate the virtual environment by running `./venv/Scripts/activate` instead of the `source` command.

5. Install the dependencies into the virtual environment:

   ```
   pip install -r requirements.txt
   ```

6. To test that everything is working, run the following command to launch the web server:

   ```
   python app.py
   ```

7. You can then access the web app by going to http://127.0.0.1:7860 in your favourite web browser.

8. From the prompted website, try clicking one of the image examples and then the orange *Submit* button. The model results should show on the right after a few seconds.

9. Try accessing the same address from your mobile phone.

10. This should not work; to access the app from a different device, you need to serve it publicly. Try setting `share=True` in the `interface.launch()` call in the `app.py` script. When running `app.py` now, you should be given a different web address. Try using that one on your mobile device instead.

But of course, hosting the app yourself from your laptop is not ideal. What if there was some way to do this without using your own device at all, completely for free...

11. Go to the Hugging Face sign-up page at https://huggingface.co/join and make an account.

12. After making an account and logging in, click the *+ New* button on the left of the website and choose *Space* from the dropdown.

13. In the *Create a new Space* tab, choose a *Space name* for the app, choose a *License* (preferably MIT), choose *Gradio* among the Space SDKs, and finally click *Create Space*.

We are now given the option to manually add the relevant files, but that is boring... Let's instead set up a robot that does it for us!

14. On the Hugging Face website, click your user badge (top right) and choose *Settings* from the dropdown. On the left-hand side of the Settings page, click *Access Tokens*, then *New token*. Set the name `HF_TOKEN`, set the permissions to *write*, and click *Generate a token*.

15. Then you need to make the same token available in your GitHub fork. Go to your fork's repo *Settings* > *Secrets and variables* > *Actions* and click the green *New repository secret*. Set `HF_TOKEN` as the name, and paste in the token you created on Hugging Face (Hugging Face > *Settings* > *Access Tokens* > select the token > click *Show*).

16. On your laptop, open the file at `.github/workflows/deploy.yml` and, on the last line, replace `andreped` and `andreped/ViT-ImageClassifier` with your own Hugging Face user and Space name.
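
For reference, a minimal sync workflow of this kind looks roughly as follows. This is a sketch, not necessarily the exact contents of the repo's `deploy.yml`, but the idea is the same: on every push to `main`, a runner checks out the full history and force-pushes it to the Space, authenticating with the secret:

```yaml
name: Sync to Hugging Face hub
on:
  push:
    branches: [main]

jobs:
  sync-to-hub:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          fetch-depth: 0  # full history is needed for a plain git push
      - name: Push to hub
        env:
          HF_TOKEN: ${{ secrets.HF_TOKEN }}  # the repository secret created on GitHub
        run: git push --force https://andreped:$HF_TOKEN@huggingface.co/spaces/andreped/ViT-ImageClassifier main
```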

17. Set up communication between GitHub and Hugging Face by running the following in the terminal (replace `andreped/ViT-ImageClassifier` like in step 16):

    ```
    git remote add space https://huggingface.co/spaces/andreped/ViT-ImageClassifier
    ```

18. Then push the code to Hugging Face to enable synchronization (only needed once):

    ```
    git push --force space main
    ```

    The first time, you will be prompted for a username and password. As the password, give the `HF_TOKEN` you defined earlier (Hugging Face > *Settings* > *Access Tokens* > select the token > click *Show*).

19. Then push the code to GitHub:

    ```
    git add .
    git commit -m "Some changes"
    git push
    ```

20. Now go to your GitHub fork (e.g., https://github.com/<username>/INF1600-ai-workshop/) and verify that the code is there.

21. Then click the *Actions* tab to see running workflows. Verify that the workflow ran successfully by clicking the current run and checking its status.

22. Finally, head over to your Hugging Face Space and check that everything is working. My own app is hosted at https://huggingface.co/spaces/andreped/ViT-ImageClassifier.

## Bonus task for speedy bois and gals

Based on this app, we have made an extended version that enables interpretation of the AI model's predictions. This technique is called Explainable AI (XAI).

If you want, you can try to reproduce the steps above with this other repo.

The extended XAI app is also deployed on Hugging Face.

## License

The code in this repository is released under the MIT license.