Update Model-VIT.ipynb #27

Merged: 1 commit, Dec 4, 2023
12 changes: 6 additions & 6 deletions data_and_models/Model-VIT.ipynb
@@ -31,8 +31,8 @@
 "\n",
 "### Setup Colab environment\n",
 "\n",
-"If you installed the packages and requirements on your own machine, you can skip this section and start from the import section.\n",
-"Otherwise, you can follow and execute the tutorial on your browser. In order to start working on the notebook, click on the following button, this will open this page in the Colab environment and you will be able to execute the code on your own.\n",
+"If you installed the packages and requirements on your machine, you can skip this section and start from the import section.\n",
+"Otherwise, you can follow and execute the tutorial in your browser. To start working on the notebook, click on the following button. This will open this page in the Colab environment and you will be able to execute the code on your own.\n",
 "\n",
 "<a href=\"https://colab.research.google.com/github/HelmholtzAI-Consultants-Munich/Zero2Hero---Introduction-to-XAI/blob/Juelich-2023/data_and_models/Model-VIT.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
 ]
@@ -41,13 +41,13 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Now that you opened the notebook in Colab, follow the next step:\n",
+"Now that you opened the notebook in Google Colab, follow the next steps:\n",
 "\n",
 "1. Run this cell to connect your Google Drive to Colab and install packages\n",
 "2. Allow this notebook to access your Google Drive files. Click on 'Yes', and select your account.\n",
 "3. \"Google Drive for desktop wants to access your Google Account\". Click on 'Allow'.\n",
 " \n",
-"At this point, a folder has been created in your Drive and you can navigate it through the lefthand panel in Colab, you might also have received an email that informs you about the access on your Google Drive."
+"At this point, a folder has been created in your Drive, and you can navigate it through the lefthand panel in Colab. You might also receive an email informing you about the access to your Google Drive."
 ]
 },
 {
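The setup cell these steps refer to is not shown in this diff. As a minimal sketch, a typical Colab setup cell for a notebook like this might look as follows; the package names are assumptions, not taken from the repository:

```python
# Hypothetical setup cell -- the actual cell contents are not part of this diff.
from google.colab import drive

# Step 1: mount Google Drive; this triggers the access prompts
# described in steps 2 and 3 above.
drive.mount('/content/drive')

# Install the tutorial's dependencies (package list assumed, not from the diff).
!pip install torch torchvision
```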
@@ -145,7 +145,7 @@
 "\n",
 "At the beginning of the process, a learnable embedding (the [CLS] token) is prepended to the sequence of embedded image patches. This token is initialized randomly and is trained along with the rest of the model.\n",
 "\n",
-"The role of the [CLS] token is to serve as a representation of the entire image. Over the course of training, it learns to capture the global context of the image, which is crucial for classification tasks."
+"The role of the [CLS] token is to represent the entire image. Throughout training, it learns to capture the global context of the image, which is crucial for classification tasks."
 ]
 },
 {
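To make the [CLS] mechanism concrete, here is a minimal PyTorch sketch of prepending a learnable [CLS] token to the patch embeddings; the tensor names and dimensions are illustrative, not taken from the notebook:

```python
import torch
import torch.nn as nn

batch_size, num_patches, embed_dim = 8, 196, 768

# Learnable [CLS] token: randomly initialized, trained with the model.
cls_token = nn.Parameter(torch.randn(1, 1, embed_dim))

# Stand-in for the embedded image patches (e.g. 14x14 patches of a 224x224 image).
patch_tokens = torch.randn(batch_size, num_patches, embed_dim)

# Prepend the [CLS] token to every sequence in the batch.
tokens = torch.cat([cls_token.expand(batch_size, -1, -1), patch_tokens], dim=1)
print(tokens.shape)  # torch.Size([8, 197, 768])
```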
@@ -154,7 +154,7 @@
 "source": [
 "## 4. **Transformer Encoder**\n",
 "\n",
-"The encoder part of the ViT like the tranditional transformers consists of the following key components:\n",
+"The encoder part of the ViT, like traditional transformers, consists of the following key components:\n",
 "\n",
 "- **Multi-Head Self-Attention (MHA):** Enables the model to focus on different parts of the image, capturing both local and global information.\n",
 "\n",
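As an illustration of the encoder structure this section describes, here is a hedged PyTorch sketch of a single ViT-style encoder block built around `nn.MultiheadAttention`; the layer sizes follow the common ViT-Base configuration and are not taken from the notebook:

```python
import torch
import torch.nn as nn

class EncoderBlock(nn.Module):
    """One ViT-style encoder block: pre-norm MHA and MLP, each with a residual."""
    def __init__(self, embed_dim=768, num_heads=12, mlp_dim=3072):
        super().__init__()
        self.norm1 = nn.LayerNorm(embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, mlp_dim),
            nn.GELU(),
            nn.Linear(mlp_dim, embed_dim),
        )

    def forward(self, x):
        # Multi-head self-attention over all tokens ([CLS] + patches).
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h)
        x = x + attn_out
        # Position-wise feed-forward network with residual connection.
        return x + self.mlp(self.norm2(x))

x = torch.randn(8, 197, 768)  # [CLS] + 196 patch tokens
print(EncoderBlock()(x).shape)  # torch.Size([8, 197, 768])
```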