

The 9th International

Learning Analytics & Knowledge Conference

Tempe, Arizona

March 4-8, 2019 #LAK19

Python Bootcamp for Learning Analytics Practitioners

Instructors: Alfred Essa, Lalitha Agnihotri, Neil Zimmerman, Ani Aghabagyan

Assistants: Kim Pham, Eddie Lin

Note: Course materials, including a detailed syllabus and Jupyter notebooks, will be available at the Course Wiki.

Summary: This hands-on tutorial will provide a rigorous introduction to Python for learning analytics practitioners. The intensive tutorial consists of five parts: a) basic and intermediate Python; b) statistics and visualization; c) machine learning; d) causal inference; and e) deep learning. The tutorial will be motivated throughout by educational datasets and examples. Its aim is to provide a thorough introduction to the computational and statistical methodologies of modern learning analytics.

1.1 Python. Python is the de facto language for scientific computing and one of the principal languages, along with R, for data science and machine learning. Along with foundational concepts such as data structures, functions, and iteration, we will cover intermediate concepts such as comprehensions, collections, generators, map/filter/reduce, and object orientation. Special emphasis will be given to coding in “idiomatic Python”.
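A minimal sketch of the "idiomatic Python" flavor this part covers, using hypothetical quiz scores (the data is illustrative, not from any real dataset):

```python
from collections import Counter

# Hypothetical list of quiz scores, one per student attempt
scores = [72, 88, 95, 61, 88, 72, 95, 95]

# List comprehension: filter and transform in a single expression
passing = [s for s in scores if s >= 70]

# Generator expression: lazily feeds sum() without building a list
mean_passing = sum(s for s in passing) / len(passing)

# collections.Counter: idiomatic tallying of repeated values
tally = Counter(scores)

print(passing)               # [72, 88, 95, 88, 72, 95, 95]
print(round(mean_passing, 2))
print(tally.most_common(1))  # [(95, 3)]
```

Each construct replaces a longer explicit loop; writing with comprehensions, generators, and the standard-library collections is what "idiomatic Python" typically refers to.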

1.2 Exploratory Data Analysis, Statistics. In this section we will introduce the core Python libraries for exploratory data analysis and basic statistics: NumPy, pandas, Matplotlib, and seaborn. We will use the Jupyter Notebook environment for interactive data analysis, annotation, and collaboration. Exploratory data analysis is a foundational step for deriving insights from data. It also serves as a prelude to building formal models and simulations.
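A small sketch of the pandas workflow this part introduces, using a hypothetical gradebook (the column names and values are made up for illustration):

```python
import pandas as pd

# Hypothetical gradebook: four students across two courses
df = pd.DataFrame({
    "student": ["s1", "s2", "s3", "s4"],
    "course":  ["stats", "stats", "calc", "calc"],
    "grade":   [3.2, 3.8, 2.9, 3.5],
})

# First look at the data: summary statistics of the numeric column
summary = df["grade"].describe()

# Group-level aggregation: mean grade per course
by_course = df.groupby("course")["grade"].mean()
print(by_course)
```

In a notebook, the same `DataFrame` would typically be passed straight to Matplotlib or seaborn (e.g. `sns.boxplot(data=df, x="course", y="grade")`) to visualize the distributions before any formal modeling.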

1.3 Machine Learning. In this section we will introduce participants to basic machine learning concepts and their application using the scikit-learn library. We will show how to predict continuous and categorical outcomes using, for example, linear and logistic regression. This demonstration will build an entire prediction pipeline from scratch: loading the data, cleaning and standardizing it, building the model, and demonstrating its validity through cross-validation. We will also discuss what an educator might do with such a model.
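The pipeline described above can be sketched in a few lines of scikit-learn; here a synthetic dataset stands in for real student data, which is an assumption for illustration only:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for an educational dataset: 10 numeric features
# predicting a binary outcome such as pass/fail
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Pipeline: standardize the features, then fit a logistic regression
model = make_pipeline(StandardScaler(), LogisticRegression())

# 5-fold cross-validation estimates out-of-sample accuracy
scores = cross_val_score(model, X, y, cv=5)
mean_accuracy = scores.mean()
print(mean_accuracy)
```

Bundling the scaler and the model into one `Pipeline` ensures the standardization is re-fit inside each cross-validation fold, avoiding leakage from the held-out data.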

1.4 Causal Inference. In this section of the tutorial we build on our statistical understanding of correlation to study causality. Randomized controlled trials (RCTs) are considered the gold standard in efficacy studies because they aim to establish the causal effect of interventions. But RCTs are often impractical to carry out and have other limitations. Causal inference from observational studies (OS) is another form of statistical analysis for evaluating intervention effects. In causal inference, the causal effect of an intervention on a particular outcome is estimated from observed data, without randomization in advance. In this tutorial, we will show how to design an observational study that leverages the large amounts of data available through online learning platforms and student information systems to draw causal claims about the effectiveness of interventions.
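One standard technique for this kind of analysis is inverse-probability weighting. The sketch below simulates a confounded observational study (all variable names, effect sizes, and the confounding structure are hypothetical) and shows how weighting by the propensity score recovers the true treatment effect where the naive comparison does not:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated observational data: prior_gpa confounds both
# treatment choice and the outcome
prior_gpa = rng.normal(3.0, 0.5, n)

# Stronger students are more likely to opt into the intervention
p_treat = 1 / (1 + np.exp(-2 * (prior_gpa - 3.0)))
treated = rng.random(n) < p_treat

# Outcome depends on prior GPA plus a true treatment effect of 5 points
outcome = 60 + 8 * prior_gpa + 5 * treated + rng.normal(0, 3, n)

# Naive comparison is confounded: treated students had higher GPAs anyway
naive = outcome[treated].mean() - outcome[~treated].mean()

# Inverse-probability weighting reweights each group to resemble the full
# population. (In a real study the propensity scores would themselves be
# estimated, e.g. with logistic regression, not known exactly.)
w = np.where(treated, 1 / p_treat, 1 / (1 - p_treat))
ipw = (np.average(outcome[treated], weights=w[treated])
       - np.average(outcome[~treated], weights=w[~treated]))

print(naive, ipw)  # the weighted estimate is much closer to 5
```

The gap between `naive` and `ipw` is the selection bias that randomization would have eliminated; observational-study designs aim to correct it with measured confounders.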

1.5 Deep Learning. In this section we introduce how to build deep learning models. Deep learning is one of the fastest-growing areas of machine learning and is particularly well suited to very large datasets. We begin by building a toy deep learning model from scratch in Python. The goal is to understand the five foundational concepts of deep learning: neurons as the atomic computational units of deep learning networks; neurons organized in stacked layers to achieve increasingly abstract data representations; forward propagation as the end-to-end computational process for generating predictions; loss and cost functions as the method for quantifying the error between prediction and ground truth; and backpropagation as the computational process for systematically reducing the error by adjusting the network’s parameters. After developing a conceptual understanding of deep learning, we apply standard Python libraries such as Keras, PyTorch, and TensorFlow to build deep learning models.
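The five concepts above can be sketched in a toy from-scratch network in NumPy. The example below (the architecture, learning rate, and XOR target are illustrative choices, not prescribed by the tutorial) implements forward propagation, a mean-squared-error cost, and backpropagation by the chain rule:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: learn XOR with a 2 -> 4 -> 1 network
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Parameters: weights and biases for the hidden and output layers
W1, b1 = rng.normal(0.0, 1.0, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0.0, 1.0, (4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    """Forward propagation: inputs -> hidden layer -> prediction."""
    h = sigmoid(X @ W1 + b1)
    return h, sigmoid(h @ W2 + b2)

h, yhat = forward(X)
initial_loss = np.mean((yhat - y) ** 2)  # mean-squared-error cost

lr = 1.0
for _ in range(5000):
    h, yhat = forward(X)

    # Backpropagation: apply the chain rule from the output layer back
    d_out = (yhat - y) * yhat * (1 - yhat)  # gradient at output pre-activation
    d_hid = (d_out @ W2.T) * h * (1 - h)    # gradient at hidden pre-activation

    # Gradient-descent updates nudge the parameters to reduce the cost
    W2 -= lr * h.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X.T @ d_hid / len(X)
    b1 -= lr * d_hid.mean(axis=0)

_, yhat = forward(X)
final_loss = np.mean((yhat - y) ** 2)
print(initial_loss, final_loss)  # the cost decreases as training proceeds
```

Frameworks such as Keras, PyTorch, and TensorFlow automate exactly this gradient computation (automatic differentiation), which is why the hand-derived `d_out`/`d_hid` lines disappear once we move to those libraries.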