Reproducible and Interactive Data Science

Syllabus

The aim of this course is to introduce students to the Jupyter Notebook, an open-source application that lets you create and share documents containing live code, equations, visualizations, and explanatory text. Uses include data cleansing and manipulation, numerical simulations, statistical modeling, machine learning, and much more. Through notebooks, research results and the underlying analyses can be transparently reproduced as well as shared. As an example, see this Notebook on gravitational waves published in Physical Review Letters.
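
As a flavour of what a notebook can contain, here is a short, purely illustrative Python cell (not part of the course material) combining a small numerical computation with a plot, the kind of live code a notebook mixes with text and equations:

    # A typical notebook code cell: compute and visualize a damped oscillation.
    import numpy as np
    import matplotlib.pyplot as plt

    t = np.linspace(0, 10, 500)                    # time axis
    y = np.exp(-0.3 * t) * np.cos(2 * np.pi * t)   # damped cosine

    fig, ax = plt.subplots()
    ax.plot(t, y)
    ax.set_xlabel('time (s)')
    ax.set_ylabel('amplitude')
    ax.set_title('Damped oscillation')
    plt.show()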

Over three days of alternating video lectures (Intro & Widgets, Libraries, ATLAS Dijet) and hands-on exercises, participants will learn to construct well-documented electronic notebooks that perform advanced data analyses and produce publication-ready plots. While the course is based on Python, prior Python experience is not a prerequisite since the Jupyter Notebook supports many programming languages. The name Jupyter itself stands for Julia, Python, and R, the main languages of data science.

Credits

4 ECTS.

Program

Sessions on December 3–5, 2018 from 10:15 to 15:00, and project presentations on January 14–15, 2019 from 10:15 to 12:00.

Location:

  • December 3, 2018: Sal D L315, Fysicum, Sölvegatan 14
  • December 4, 2018: lecture hall HUB, Department of Astronomy and Theoretical Physics, Sölvegatan 27
  • December 5, 2018: lecture hall Cassiopeia, Department of Astronomy and Theoretical Physics, Sölvegatan 27
  • January 14, 2019: lecture hall Cassiopeia, Department of Astronomy and Theoretical Physics, Sölvegatan 27
  • January 15, 2019: lecture hall Cassiopeia, Department of Astronomy and Theoretical Physics, Sölvegatan 27

The course consists of five full days: three days of alternating video lectures (Intro & Widgets, Libraries, ATLAS Dijet) and hands-on exercises, and two days of project presentations. The notebooks shown in the video lectures are available in the lectures folder of this repository.

Prerequisites

  • No prior knowledge in Python is required, but familiarity with programming concepts is helpful.
  • A laptop connected to the internet (e.g. via eduroam), running Linux, macOS, or Windows, and with Anaconda installed (see below).
  • Earphones for silently watching lectures during the sessions.

If you have little experience with Python or shell programming, the following two tutorials may be helpful:

Preparation Before the First Session

  1. Watch the video lectures (Intro & Widgets, Libraries, ATLAS Dijet).

  2. Install miniconda3 or, alternatively, the full anaconda3 environment on your laptop (the latter is much larger).

  3. Download the course material (this GitHub repository) and unzip it.

  4. Install and activate the LUcompute environment described by the file environment.yml by running the following in a terminal:

    conda env create -f environment.yml
    source activate LUcompute

Instructions for Windows:

  1. Watch the video lectures (Intro & Widgets, Libraries, ATLAS Dijet).

  2. Install miniconda3.

  3. Download the course material (this GitHub repository) and unzip it.

  4. Open the Anaconda Prompt from the Start menu.

  5. Navigate to the folder where the course material has been unzipped (e.g. using cd to change directory and dir to list files in a folder).

  6. Install and activate the LUcompute environment described by the file environment.yml by running the following in the Anaconda Prompt (a quick check of the resulting installation is sketched after these instructions):

    conda env create -f environment.yml
    activate LUcompute
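
On either platform, once the LUcompute environment is activated you can run a quick sanity check by starting Python (or a notebook) and importing a few core packages. This is only a minimal sketch and assumes that the environment provides numpy, pandas, and matplotlib; the environment.yml file in this repository is the authoritative package list:

    # Quick sanity check: import a few core packages and print their versions.
    import numpy, pandas, matplotlib
    print('numpy', numpy.__version__)
    print('pandas', pandas.__version__)
    print('matplotlib', matplotlib.__version__)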

Further Information

Project Work

The project work consists of the following steps:

  1. Each student will make a Notebook project covering topics from days 1–3, with either:
  • a research Notebook, presenting the data analysis and theory behind a manuscript or published paper. The Notebook should ideally be written such that it can act as supporting information (SI) for a journal. Here's some inspiration.
  • or a Notebook presenting a textbook topic of choice, aimed at students. Here's some inspiration.
  • Deadline for the project: January 3, 2019.
  2. Each student will upload her/his project to a public GitHub repository created through GitHub Classroom. For a brief introduction to git repositories, see here. You can find your repository here; press Cancel if an error occurred during importing.
    Notify your referees via email that your notebook is ready to be checked.

  3. A peer-review process where each student reviews and writes comments on two other notebooks by creating issues on the respective GitHub repositories. The review should be based on the criteria listed below. For each point, include specific suggestions for improvements. Deadline for the review: January 10, 2019.

  4. Notebook presentation to the class (day 4). Maximum 10 minutes per participant.
    The presentation should briefly show the workflow of the Notebook. Include your response to the reviewers' comments and highlight the most interesting, original, or advanced features of your Notebook (e.g. the use of a particular library, a certain composite plot, a method to manage references, a way to implement interactivity (a minimal example is sketched after this list), or any other feature that you found particularly useful and would like to share).

  5. Save your project when the course has finished, as we may delete it before the next course event.
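
Interactivity is one of the features mentioned above and covered in the Intro & Widgets lecture. As a minimal sketch of what this can look like, assuming the ipywidgets package is available in your environment (this README does not guarantee it), a slider can be bound to a plotting function with interact:

    # Minimal interactivity sketch with ipywidgets: a slider controls the
    # frequency of a sine curve that is re-drawn on every change.
    import numpy as np
    import matplotlib.pyplot as plt
    from ipywidgets import interact

    def plot_sine(frequency=1.0):
        x = np.linspace(0, 2 * np.pi, 400)
        plt.plot(x, np.sin(frequency * x))
        plt.xlabel('x')
        plt.ylabel('sin(frequency * x)')
        plt.show()

    # Creates a slider from 0.5 to 5.0 in steps of 0.5 and re-runs plot_sine on change.
    interact(plot_sine, frequency=(0.5, 5.0, 0.5))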

Notebook Requirements

This checklist summarizes the minimum requirements for the Notebook project to be approved. It should be used as a reference both for developing the Notebook and for the peer-review process; a short sketch touching on several of these points follows the list.

  • Documentation:
    • includes rich documentation using Markdown (equations, tables, links, images or videos)
    • includes instructions on how to run the notebook
    • includes the required packages in an environment.yml file
    • is reproducible, i.e., someone else should be able to redo the steps
  • Input/Output:
    • uses pandas to read large data sets or numpy to load data from text files
    • uses pandas to save to disk the processed or generated data
  • Scientific computing/data processing:
    • performs numerical operations (numpy, scipy, pandas) or manipulates, groups, and aggregates a data set (pandas)
  • Data visualization:
    • includes at least one composite plot (inset or multiple panels)
    • produces publication-ready figures (see here for an editorial guide on Graphical Excellence):
      • the figures are 89 mm wide (single column) or 183 mm wide (double column)
      • the axes are labeled
      • the font sizes are sufficiently large
      • the figures are saved as rasterized images (300 dpi) or vector art
  • Version control, sharing, and archiving:
    • is archived in a repository with a digital object identifier (DOI)
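
To make several of these points concrete, the following sketch (with made-up data and file names, purely for illustration) shows pandas I/O, a single-column (89 mm) composite figure with an inset, labeled axes, and a 300 dpi export:

    # Illustrative sketch of several checklist items: pandas I/O, a composite
    # figure (main panel + inset) at single-column width, labeled axes, 300 dpi export.
    import numpy as np
    import pandas as pd
    import matplotlib.pyplot as plt

    # Generate and save a small data set (the file name is just an example).
    df = pd.DataFrame({'x': np.linspace(0, 10, 200)})
    df['y'] = np.sin(df['x']) * np.exp(-df['x'] / 5)
    df.to_csv('processed_data.csv', index=False)

    # Reload it as you would a real data set.
    df = pd.read_csv('processed_data.csv')

    # 89 mm single-column width, converted to inches for matplotlib.
    width_in = 89 / 25.4
    fig, ax = plt.subplots(figsize=(width_in, 0.75 * width_in))
    ax.plot(df['x'], df['y'])
    ax.set_xlabel('x (arb. units)')
    ax.set_ylabel('y (arb. units)')

    # Inset panel zooming in on the first part of the curve.
    ax_in = fig.add_axes([0.55, 0.55, 0.3, 0.3])
    ax_in.plot(df['x'][:50], df['y'][:50])
    ax_in.set_title('zoom', fontsize=8)

    # Save as a rasterized image at 300 dpi (vector formats such as PDF also work).
    fig.savefig('figure1.png', dpi=300, bbox_inches='tight')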

Troubleshooting

If your notebook fails to connect to its kernel with an error similar to the lines below:

[E 12:18:57.001 NotebookApp] Uncaught exception in /api/kernels/5e16fa4b-3e35-4265-89b0-ab36bb0573f5/channels
 Traceback (most recent call last):
   File "/Library/Python/2.7/site-packages/tornado-5.0a1-py2.7-macosx-10.13-intel.egg/tornado/websocket.py", line 494, in _run_callback
     result = callback(*args, **kwargs)
   File "/Library/Python/2.7/site-packages/notebook-5.2.2-py2.7.egg/notebook/services/kernels/handlers.py", line 258, in open
     super(ZMQChannelsHandler, self).open()
   File "/Library/Python/2.7/site-packages/notebook-5.2.2-py2.7.egg/notebook/base/zmqhandlers.py", line 168, in open
     self.send_ping, self.ping_interval, io_loop=loop,
 TypeError: __init__() got an unexpected keyword argument 'io_loop'
[I 12:18:58.021 NotebookApp] Adapting to protocol v5.1 for kernel 5e16fa4b-3e35-4265

You should either (a) downgrade the package "tornado" to a pre-5.0 release (for instance with conda install "tornado<5"), or (b) change line 178 of the file

[your conda installation location]/miniconda3/envs/LUcompute/lib/python3.6/site-packages/notebook/base/zmqhandlers.py 

from

             self.send_ping, self.ping_interval, io_loop=loop,

into

             self.send_ping, self.ping_interval,

See this Stack Overflow thread for more details: https://stackoverflow.com/questions/48090119/jupyter-notebook-typeerror-init-got-an-unexpected-keyword-argument-io-l

External Resources

  • Cross-language interaction is a striking feature of Jupyter notebooks: the ability to combine multiple languages in the same notebook makes it possible to use the best tool for each step of a data analysis. You can read more about it in this post.
  • The Jupyter notebook is a very popular tool for working with data in academia as well as in the private sector.
    • These tutorials show how the LIGO/VIRGO collaboration extensively uses Jupyter notebooks to communicate its research.
    • The streaming service Netflix currently uses Jupyter notebooks as the main tool for data analysis. For example, the recommendation algorithms that suggest which movies or TV series to watch next are run on Jupyter notebooks. You can read more about it in this post.
    • In 2017 Jupyter received the ACM Software System Award, a prestigious award that it shares with projects such as Unix and the Web.
  • There are many freely available online resources to learn data science.
    • The best resource to find help with programming and scripting is Stack Overflow, which is a question and answer website curated by software developer communities.
    • An excellent book is "Python Data Science Handbook" by Jake VanderPlas which is freely available as Jupyter notebooks at this GitHub page. On the author's webpage, you can also find a list of excellent talks, lectures, and tutorials and a blog.
    • Yet another useful resource is the podcast Data Skeptic which features a collection of entertaining and educational mini-lectures on data science as well as interviews with experts.
