The Natural Language Processing Workshop


This is the repository for The Natural Language Processing Workshop, published by Packt. It contains all the supporting project files necessary to work through the course from start to finish.

Requirements and Setup


To get started with the project files, you'll need to:

  1. Install Jupyter on Windows, macOS, or Linux
  2. Install Anaconda on Windows, macOS, or Linux

Prerequisites for The-Natural-Language-Processing-Workshop

  1. Download and Install Python using Anaconda Distribution

  2. Create a virtual environment using either of the following methods:

     If using the Anaconda distribution:

        conda create -n nlp-env python=3.7

        conda activate nlp-env

     If using Python's built-in venv module:

        python -m venv nlp-env

        .\nlp-env\Scripts\activate (Windows)

        source nlp-env/bin/activate (Linux or macOS)

  3. Install all the required packages by running the following command:

        pip install -r requirements.txt

  4. Download the NLTK data packages (for example, by running the following command):

        python -m nltk.downloader all

  5. Download the spaCy model using the following command:

        python -m spacy download en_core_web_sm
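After completing the steps above, a quick way to sanity-check the environment is a small standard-library-only script. The package names below are assumptions about what `requirements.txt` includes; adjust them to match the actual file:

```python
import importlib.util

def missing_packages(names):
    """Return the package names that cannot be imported in the current environment."""
    return [name for name in names if importlib.util.find_spec(name) is None]

# Package names assumed to be in the workshop's requirements.txt
required = ["nltk", "spacy", "sklearn"]
print(missing_packages(required))  # an empty list means the setup succeeded
```

If any names are printed, re-run `pip install -r requirements.txt` inside the activated `nlp-env` environment.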

About The Natural Language Processing Workshop

Do you want to learn how to communicate with computer systems using Natural Language Processing (NLP) techniques, or make a machine understand human sentiments? Do you want to build applications like Siri, Alexa, or chatbots, even if you’ve never done it before?

With The Natural Language Processing Workshop, you can expect to make consistent progress as a beginner, and get up to speed in an interactive way, with the help of hands-on activities and fun exercises.

What you will learn

  • Obtain, verify, clean, and transform text data into the correct format for use
  • Use methods such as tokenization and stemming for text extraction
  • Develop a classifier to classify comments in Wikipedia articles
  • Collect data from open websites with the help of web scraping
  • Train a model to detect topics in a set of documents using topic modeling
  • Discover techniques to represent text as word and document vectors
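To give a feel for the first of these preprocessing ideas, here is a minimal, standard-library-only sketch of tokenization and stemming. The workshop itself uses NLTK's tokenizers and stemmers; the regex and suffix list below are deliberate simplifications, not the library's actual behavior:

```python
import re

def tokenize(text):
    """Split text into lowercase word tokens (a crude stand-in for NLTK's word_tokenize)."""
    return re.findall(r"[a-z]+", text.lower())

def stem(token):
    """Strip a few common English suffixes (a toy version of a Porter-style stemmer)."""
    for suffix in ("ing", "ed", "es", "s"):
        if token.endswith(suffix) and len(token) - len(suffix) >= 3:
            return token[: len(token) - len(suffix)]
    return token

tokens = tokenize("Tokenizing and stemming are basic preprocessing steps.")
print([stem(t) for t in tokens])
# → ['tokeniz', 'and', 'stemm', 'are', 'basic', 'preprocess', 'step']
```

Note how the toy stemmer produces non-words like `tokeniz`; real stemmers behave similarly, which is why stemming is suited to indexing and classification rather than display.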

Related Workshops

If you've found this repository useful, you might want to check out some of our other workshop titles.


No description, website, or topics provided.




No releases published


No packages published
You can’t perform that action at this time.