A Code-First Intro to Natural Language Processing

You can find out about the course in this blog post, and all lecture videos are available here.

This course was originally taught in the University of San Francisco's Master of Science in Data Science program during summer 2019. The course is taught in Python with Jupyter Notebooks, using libraries such as sklearn, nltk, pytorch, and fastai.

Table of Contents

The following topics will be covered:

1. What is NLP?

  • A changing field
  • Resources
  • Tools
  • Python libraries
  • Example applications
  • Ethics issues

2. Topic Modeling with NMF and SVD

  • Stop words, stemming, & lemmatization
  • Term-document matrix
  • Term Frequency-Inverse Document Frequency (TF-IDF)
  • Singular Value Decomposition (SVD)
  • Non-negative Matrix Factorization (NMF)
  • Truncated SVD, Randomized SVD
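
As a quick preview of what "code-first" means here, below is a minimal topic-modeling sketch in the spirit of this section, using scikit-learn's TfidfVectorizer, TruncatedSVD, and NMF on a few toy documents. It is illustrative only and is not taken from the course notebooks.

```python
# A minimal sketch of the TF-IDF + SVD/NMF pipeline, using scikit-learn
# on a handful of toy documents (not the course's actual dataset).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD, NMF

docs = [
    "the cat sat on the mat",
    "dogs and cats are pets",
    "stock markets fell sharply today",
    "investors sold shares as markets dropped",
]

# Term-document matrix weighted by TF-IDF, with English stop words removed.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# Two ways to extract 2 "topics": truncated SVD and non-negative matrix factorization.
svd_topics = TruncatedSVD(n_components=2).fit_transform(X)
nmf = NMF(n_components=2, init="nndsvd").fit(X)

# Top words per NMF topic.
terms = vectorizer.get_feature_names_out()
for i, comp in enumerate(nmf.components_):
    top = [terms[j] for j in comp.argsort()[-3:][::-1]]
    print(f"topic {i}: {top}")
```
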

3. Sentiment classification with Naive Bayes, logistic regression, and n-grams

  • Sparse matrix storage
  • Counters
  • The fastai library
  • Naive Bayes
  • Logistic regression
  • N-grams
  • Logistic regression with Naive Bayes features, with trigrams
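
A minimal sketch of bag-of-n-grams sentiment classification with Naive Bayes and logistic regression, using scikit-learn rather than the course's fastai-based code; the toy texts and labels are made up for illustration.

```python
# Bag-of-ngrams sentiment classification with Naive Bayes and logistic regression.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression

texts = ["loved this movie", "what a great film", "terrible acting", "boring and bad"]
labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative

# Sparse term-document matrix of unigrams and bigrams.
vec = CountVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(texts)

nb = MultinomialNB().fit(X, labels)
lr = LogisticRegression(max_iter=1000).fit(X, labels)

test = vec.transform(["great movie", "bad film"])
print(nb.predict(test), lr.predict(test))
```
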

4. Regex (and re-visiting tokenization)
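
A toy illustration of regex-based tokenization, using only Python's standard-library re module; the pattern below is deliberately simplistic compared to the tokenizers discussed in the course.

```python
# Tokenizing with a regular expression: words (with internal apostrophes)
# or single punctuation marks. Real tokenizers handle far more cases.
import re

text = "Mr. O'Neill doesn't like NLP's edge-cases, does he?"
tokens = re.findall(r"\w+(?:'\w+)?|[^\w\s]", text)
print(tokens)
```
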

5. Language modeling & sentiment classification with deep learning

  • Language model
  • Transfer learning
  • Sentiment classification
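
The course builds its language models with fastai's ULMFiT recipe; the sketch below is a stripped-down PyTorch illustration of the same two ideas — train a model to predict the next token, then reuse its encoder for sentiment classification. All sizes and data are made up.

```python
# A minimal language model plus transfer-learning sketch (toy sizes, random data).
import torch
import torch.nn as nn

vocab_size, emb_dim, hidden = 100, 32, 64

class LanguageModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)  # next-token prediction

    def forward(self, x):
        out, _ = self.rnn(self.emb(x))
        return self.head(out)

lm = LanguageModel()
tokens = torch.randint(0, vocab_size, (8, 20))   # batch of token ids
logits = lm(tokens[:, :-1])                      # predict token t+1 from tokens up to t
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1)
)

# Transfer learning: reuse the trained embedding + LSTM as an encoder and put a
# small classification head on the final hidden state.
classifier_head = nn.Linear(hidden, 2)            # positive / negative
sentence_repr = lm.rnn(lm.emb(tokens))[0][:, -1]  # last hidden state per sequence
sentiment_logits = classifier_head(sentence_repr)
```
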

6. Translation with RNNs

  • Review of embeddings
  • BLEU metric
  • Teacher Forcing
  • Bidirectional
  • Attention
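
A minimal PyTorch sketch of the encoder-decoder translation setup with teacher forcing. Everything here (vocabulary sizes, toy batches) is invented for illustration; the course notebook 7-seq2seq-translation.ipynb builds the full version.

```python
# Encoder-decoder with GRUs and teacher forcing, on random toy data.
import torch
import torch.nn as nn

src_vocab, trg_vocab, emb, hid = 50, 60, 32, 64
enc_emb = nn.Embedding(src_vocab, emb)
encoder = nn.GRU(emb, hid, batch_first=True)
dec_emb = nn.Embedding(trg_vocab, emb)
decoder = nn.GRU(emb, hid, batch_first=True)
out_head = nn.Linear(hid, trg_vocab)

src = torch.randint(0, src_vocab, (4, 7))   # batch of source sentences
trg = torch.randint(0, trg_vocab, (4, 9))   # batch of target sentences

_, h = encoder(enc_emb(src))                # final hidden state summarizes the source

loss, inp = 0.0, trg[:, :1]                 # start decoding from the first target token
for t in range(1, trg.size(1)):
    out, h = decoder(dec_emb(inp), h)
    logits = out_head(out[:, -1])
    loss = loss + nn.functional.cross_entropy(logits, trg[:, t])
    inp = trg[:, t:t+1]                     # teacher forcing: feed the true token, not the prediction
print(loss / (trg.size(1) - 1))
```
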

7. Translation with the Transformer architecture

  • Transformer Model
  • Multi-head attention
  • Masking
  • Label smoothing
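
Below is a small sketch of the scaled dot-product attention at the core of the Transformer, including the causal mask used in the decoder; multi-head attention runs this same computation in several learned subspaces in parallel. Shapes are toy values.

```python
# Scaled dot-product attention with an optional mask.
import math
import torch

def attention(q, k, v, mask=None):
    # q, k, v: (batch, seq_len, d); scores: (batch, seq_len, seq_len)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v

batch, seq, d = 2, 5, 16
x = torch.randn(batch, seq, d)

# Causal mask: position t may only attend to positions <= t.
causal = torch.tril(torch.ones(seq, seq))
out = attention(x, x, x, mask=causal)
print(out.shape)  # torch.Size([2, 5, 16])
```
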

8. Bias & ethics in NLP

  • Bias in word embeddings
  • Types of bias
  • Attention economy
  • Drowning in fraudulent/fake info
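
To make the first bullet concrete, here is a toy illustration of one common way bias in word embeddings is measured: projecting words onto a "gender direction" with cosine similarity. The 4-dimensional vectors are invented purely for illustration; real analyses use pretrained embeddings such as word2vec or GloVe.

```python
# Measuring a "gender direction" in (made-up) word vectors with cosine similarity.
import numpy as np

vectors = {
    "he":     np.array([ 1.0, 0.2, 0.0, 0.1]),
    "she":    np.array([-1.0, 0.2, 0.0, 0.1]),
    "doctor": np.array([ 0.3, 0.9, 0.1, 0.0]),
    "nurse":  np.array([-0.4, 0.8, 0.1, 0.0]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

gender_direction = vectors["he"] - vectors["she"]
for word in ("doctor", "nurse"):
    # Opposite-signed projections suggest the occupations lean toward different genders.
    print(word, round(float(cosine(vectors[word], gender_direction)), 3))
```
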

Why is this course taught in a weird order?

This course is structured with a top-down teaching method, which is different from how most math courses operate. Typically, in a bottom-up approach, you first learn all the separate components you will be using and then gradually build them up into more complex structures. The problem with this approach is that students often lose motivation, don't have a sense of the "big picture", and don't know what they'll need.

Harvard professor David Perkins, in his book Making Learning Whole, uses baseball as an analogy. We don't require kids to memorize all the rules of baseball and understand all the technical details before we let them play the game. Rather, they start playing with just a general sense of it, and then gradually learn more rules and details as time goes on.

If you took the fast.ai deep learning course, you will recognize this approach: it is the same one we used there. You can hear more about my teaching philosophy in this blog post or in this talk I gave at the San Francisco Machine Learning meetup.

All that to say, don't worry if you don't understand everything at first! You're not supposed to. We will start using some "black boxes" and then we'll dig into the lower level details later.

To start, focus on what things DO, not what they ARE.
