Graduate Teaching Assistant AI and ML

Brief Introduction

I am a Graduate Teaching Assistant for a senior course on Artificial Intelligence and Machine Learning with 100-120 students. It was one of the more interesting courses: the teacher was excellent and set a lot of interesting assignments. It was up to the GTA to discuss these ideas, elaborate on concepts, and guide the students. Most of the effort went into maintaining office hours.

Students were quite mature by then and didn't expect a lot from the GTA. I never had a single soul turn up for office hours. Grading was quite simple too; I could get the entire job done in about 2 hours per week.

My responsibilities as a graduate teaching assistant

  • Deliver classes – Support (and sometimes lead) seminars and tutorials.
  • Supervise projects – Assist undergraduate and postgraduate students working on their final year projects.
  • Support learning – Support students with the technical aspects of their course.
  • Provide feedback – Provide students with valuable and timely feedback to aid in their development.
  • Review work – Participate in the assessment process by marking work.
  • Give demonstrations – Deliver demonstrations for practical work, advising on the required skills, methods and techniques.
  • Support fieldwork – Assist in the preparation and delivery of fieldwork.
  • Be effective – Familiarise themselves with course material, continually develop their skills and undertake training to enable the best support for students.
  • Pastoral care – Direct students to support facilities provided by the university based on their personal needs.

Additional Responsibilities

  • Participate in the assessment process by invigilating exams.
  • Help develop, update and gather teaching material to support the development of the course curriculum.
  • Take on limited administrative responsibilities as requested by the Head of Department.
  • Provide in-person or email support to assist with student enquiries during the exam period.

Benefits of becoming a Graduate Teaching Assistant

Being a GTA gave me an opportunity to extend my knowledge and acquire new skills. These include teaching, communicating, and the ability to break down complex theories in a way that can be understood more easily.

Besides this, teaching students is an excellent way to develop my abilities. It helps me consolidate my existing technical knowledge, gain knowledge of new topics, and get valuable hands-on teaching experience. This is a significant advantage. Carrying out the duties of a GTA well helps prepare me for a career of daily learning, and provides a strong foundation that I can use to sell myself when applying for a new job as a Data Science developer.

Articles added for further learning

Week 01

Before moving into the course "Artificial Intelligence and Machine Learning", I added 3 articles covering the History of Artificial Intelligence, an Introduction to Artificial Intelligence and Machine Learning, and Performance Metrics in Machine Learning Classification Models.

Week 02

Before starting the tutorials on machine learning, I came up with the idea of providing brief definitions of key terms used in Machine Learning. These key terms will be used regularly in our coming lectures, tutorials, and workshops on machine learning, and will also appear in further higher-level courses.

**A beginner's guide on how to calculate Precision, Recall, and F1-score for a multi-class classification problem**
VERY IMPORTANT 😊😊
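
To make the calculation concrete, here is a minimal sketch in plain Python. The animal labels are made up purely for illustration; the numbers can be cross-checked against scikit-learn's `classification_report` if you have it installed.

```python
# Per-class precision, recall, and F1 for a multi-class problem,
# computed directly from true/false positive and negative counts.
y_true = ["cat", "dog", "bird", "cat", "dog", "cat", "bird", "dog"]
y_pred = ["cat", "dog", "cat",  "cat", "bird", "cat", "bird", "dog"]

f1_scores = []
for label in sorted(set(y_true)):
    tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
    fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))

    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    f1_scores.append(f1)
    print(f"{label}: precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")

# Macro average: unweighted mean of the per-class F1 scores
print(f"macro F1 = {sum(f1_scores) / len(f1_scores):.2f}")
```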

Week 03

I am going to discuss Performance Metrics again, and this time it will be Regression model metrics. In my previous article I discussed Classification metrics; this time it is Regression. I am going to talk about the 5 most widely used Regression metrics.

Similarly, I am going to cover concrete definitions of Linear Regression and Logistic Regression.
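
As a quick reference, here is a small NumPy sketch of four of the metrics discussed later in the course (MSE, MAE, RMSE, and R^2). The `y_true` and `y_pred` values are invented for illustration.

```python
import numpy as np

y_true = np.array([3.0, 5.0, 2.5, 7.0, 4.5])
y_pred = np.array([2.8, 5.4, 2.9, 6.1, 4.2])

mse = np.mean((y_true - y_pred) ** 2)            # Mean Squared Error
mae = np.mean(np.abs(y_true - y_pred))           # Mean Absolute Error
rmse = np.sqrt(mse)                              # Root Mean Squared Error
ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
r2 = 1 - ss_res / ss_tot                         # R^2 (coefficient of determination)

print(f"MSE={mse:.3f} MAE={mae:.3f} RMSE={rmse:.3f} R^2={r2:.3f}")
```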

Week 04

If you are building your own neural network, you will definitely need to understand how to train it. So, I have added a step-by-step guide to understand how backpropagation is used in an ANN model.

Backpropagation is a commonly used technique for training neural networks. There are many resources explaining the technique, but this article explains backpropagation with a concrete example, in detailed, colorful steps.
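
The sketch below shows the same idea in code: a tiny two-layer network with sigmoid activations trained on XOR data using hand-written backpropagation. The layer sizes, learning rate, and data are illustrative choices, not part of the article itself.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))   # hidden -> output
lr = 0.5

for epoch in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (chain rule) with squared-error loss
    d_out = (out - y) * out * (1 - out)     # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)      # gradient at the hidden layer

    # Gradient-descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# Outputs should approach [0, 1, 1, 0]; convergence depends on the random init
print(out.round(3))
```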

Implementation of K-Nearest Neighbors from scratch

I have added an article on how the KNN algorithm operates and how it can be implemented in Python; a minimal sketch also follows the list below. There are several reasons why implementing algorithms from scratch can be useful:

  • It can help us understand the inner workings of an algorithm
  • We could try to implement an algorithm more efficiently
  • We can add new features to an algorithm or experiment with different variations of the core idea
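
Here is a compact from-scratch KNN classifier with NumPy. The toy points, labels, and value of k are made up for illustration.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_query, k=3):
    # Euclidean distance from the query point to every training point
    distances = np.sqrt(((X_train - x_query) ** 2).sum(axis=1))
    # Indices of the k closest training points
    nearest = np.argsort(distances)[:k]
    # Majority vote among the k nearest labels
    return Counter(y_train[nearest]).most_common(1)[0][0]

X_train = np.array([[1.0, 2.0], [1.5, 1.8], [5.0, 8.0], [6.0, 9.0], [1.2, 0.9]])
y_train = np.array(["A", "A", "B", "B", "A"])

print(knn_predict(X_train, y_train, np.array([1.4, 1.5])))  # -> "A"
print(knn_predict(X_train, y_train, np.array([5.5, 8.2])))  # -> "B"
```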

Week 06

So far, we have covered the core of Machine Learning: we have written AI programs for regression and classification, and we have learned about MSE, MAE, RMSE, R^2, and the Confusion Matrix. In our last tutorial, we learned about Deep Learning and the concepts behind it.

In this article, we are going to learn about the CNN, an efficient recognition algorithm that is widely used in pattern recognition and image processing. We will walk through the CNN model summary, which includes the Filter (kernel / feature detector), Stride, Convolution Layer, Pooling Layer, and Flatten Layer.
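
As a small companion to that summary, here is a minimal Keras sketch wiring those building blocks together. It assumes TensorFlow is installed; the input shape and layer sizes are illustrative, not taken from the article.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),                    # e.g. grayscale images
    tf.keras.layers.Conv2D(16, kernel_size=3, strides=1,
                           activation="relu"),            # filter/kernel + stride
    tf.keras.layers.MaxPooling2D(pool_size=2),            # pooling layer
    tf.keras.layers.Flatten(),                            # flatten layer
    tf.keras.layers.Dense(10, activation="softmax"),      # classifier head
])

model.summary()   # prints the layer-by-layer model summary
```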

Week 07

Language is the most important tool of communication invented by human civilization. It is either spoken or written, consisting of the use of words in a structured and conventional way. Language helps us share our thoughts, and understand others.

Natural Language Processing, a form of artificial intelligence, is all about trying to analyze and understand written or spoken language and the context in which it is used. The ultimate objective of NLP is to read, decipher, understand, and make sense of human language in a manner that is valuable.

In this session, I will be covering a basic course on Natural Language Processing.
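
One of the very first steps in such a course is turning raw text into numbers. The short sketch below uses scikit-learn's `CountVectorizer` to build a bag-of-words matrix; the library choice and the example sentences are my own, not part of the session material.

```python
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "Language helps us share our thoughts.",
    "NLP tries to understand written or spoken language.",
]

vectorizer = CountVectorizer()          # tokenises and builds the vocabulary
X = vectorizer.fit_transform(corpus)    # document-term count matrix

print(vectorizer.get_feature_names_out())
print(X.toarray())
```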

R^2 and Karl Pearson Correlation Coefficient

For concrete definitions of R^2 and the Karl Pearson Correlation Coefficient, I have added the math behind R^2 and the Karl Pearson Correlation Coefficient and their calculation; a small numeric sketch follows the points below.

  • R-squared will give you an estimate of the relationship between movements of a dependent variable based on an independent variable's movements.
  • Pearson's correlation coefficient is the test statistic that measures the statistical relationship, or association, between two continuous variables.
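
Here is that calculation in NumPy. The x and y values are invented sample points; note that for a simple linear regression of y on x, R^2 equals the squared Pearson correlation.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # independent variable
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # dependent variable

# Pearson correlation: covariance divided by the product of standard deviations
r = np.sum((x - x.mean()) * (y - y.mean())) / np.sqrt(
    np.sum((x - x.mean()) ** 2) * np.sum((y - y.mean()) ** 2)
)

# R^2 of the simple linear regression of y on x
slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

print(f"Pearson r = {r:.4f}, r^2 = {r**2:.4f}, R^2 = {r2:.4f}")
```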

**VERY IMPORTANT** 😊😊

Designed Mock Test Question Sets (A, B) and added answers to the multiple-choice questions with proper explanations

For a more concrete and detailed understanding, I have added answers to the multiple-choice questions with thorough explanations to help everyone learn more.

PCA (Find Eigenvalues and Eigenvectors from the given data point)

I would like to say a few things about Vignesh Natarajan's answer first:

The curse of dimensionality is not about having a large number of dimensions; it is about having an algorithm that struggles with a large number of dimensions, or in more general terms, a bad combination of algorithm and dimensionality for whatever reason. Some algorithms perform very well in millions of dimensions, such as the Perceptron and Linear SVM.

What Vignesh describes to reduce dimensionality is known as PCA (Principal Component Analysis), a technique that is essentially equivalent to computing the SVD (Singular Value Decomposition) of your mean-centred data matrix. The clarification I want to make is that with PCA you don't discover the principal dimensions of your data, you discover the principal components, and each component is a linear combination of your dimensions. So you can't use PCA or SVD to know whether your "age" column plays a bigger role than "price", but you can use it to effectively reduce the number of dimensions in your data when you need to. Using PCA or SVD just for the sake of it is not good practice.

Vignesh is absolutely right about the importance of Eigenvectors and Eigenvalues as a way to change the dimensionality of your data. They are the key to SVD and PCA.

Just to add something to the original answer about eigenvectors and eigenvalues in Machine Learning they are also used in Spectral Clustering.
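
To tie this back to the heading above (finding eigenvalues and eigenvectors from given data points), here is a minimal NumPy sketch of PCA "by hand". The data points are illustrative only.

```python
import numpy as np

X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9],
              [1.9, 2.2], [3.1, 3.0], [2.3, 2.7]])

X_centered = X - X.mean(axis=0)               # mean-centre each column
cov = np.cov(X_centered, rowvar=False)        # 2x2 covariance matrix

eigenvalues, eigenvectors = np.linalg.eigh(cov)   # eigh: for symmetric matrices
order = np.argsort(eigenvalues)[::-1]             # sort by decreasing variance
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

print("eigenvalues:", eigenvalues)
print("principal components (columns):\n", eigenvectors)

# Project onto the first principal component to reduce 2D -> 1D
X_reduced = X_centered @ eigenvectors[:, :1]
print(X_reduced)
```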

Happy Maths, Happy AI

Final Exam Solution

Finally, I have added all the answers to the final exam questions prepared by the university (college).

That’s all.
You may be interested to read more and see yourself in Artificial Intelligence and Machine Learning.
Thanks for reading.

About

Work with academic staff and students who need some extra support with skills such as Data Science, Artificial Intelligence, Machine Learning, Deep Learning, Natural Language Processing, and the mathematical concepts behind all of those terms.
