
LyricsCrawler (for Metrolyrics and Lyricswiki)

Automatically downloads the lyrics of the top X songs for each artist in the MetroLyrics database and saves them to a CSV file along with the artist, song name, and URL.

Currently filters out non-English lyrics using langid; the target language can easily be changed. Broken lyrics pages are also filtered out.
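The language filter can be sketched roughly as below. This is an illustration, not the project's actual code: `langid.classify(text)` returns a `(lang, score)` tuple, and the `classify` parameter is injectable here only so the sketch runs without langid installed.

```python
def is_english(text, classify=None):
    """Return True if the detected language of `text` is English.

    `classify` defaults to langid.classify (third-party:
    pip install langid); it can be overridden for testing.
    """
    if classify is None:
        import langid
        classify = langid.classify
    lang, _score = classify(text)  # e.g. ("en", -54.3)
    return lang == "en"
```

Changing the target language is then a matter of comparing against a different ISO 639-1 code.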

Supports resuming the download after an application crash or internet failure. Built on the basic spider example from the Scrapy documentation.

Usage and requirements

Required packages

The web crawler is built on the Python Scrapy framework; see the official Scrapy documentation for details.

  1. Install Scrapy:
pip install scrapy
  2. For language filtering, langid is used. Install it as follows:
pip install langid

Settings (optional)

Open the settings file inside the metrolyrics folder to adjust the configuration. In particular, you can set the number of concurrent requests, the output path for the lyrics, and the number of songs crawled per artist.
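The relevant entries in the settings file might look like the following. `CONCURRENT_REQUESTS` is a standard Scrapy setting; the other two names are hypothetical placeholders for this project's options, so check the actual settings file for the real names.

```python
# Standard Scrapy setting: how many requests are issued in parallel.
CONCURRENT_REQUESTS = 16

# Hypothetical project-specific settings (names for illustration only):
LYRICS_OUTPUT_PATH = "lyrics.csv"   # where the crawled lyrics are written
MAX_SONGS_PER_ARTIST = 100          # top X songs fetched per artist
```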


  3. Starting the crawler:
# Go to the scrapy project top level folder
cd crawlers

# Run the spider - output the results as simple csv file with lyrics, lyrics URL, song name, and artist name as columns.
scrapy crawl metrolyrics -s JOBDIR=jobstate

The crawler state is saved so the run can be resumed later. By default, results are saved to "lyrics.csv" in the same folder.
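The output file can be inspected with the standard library. The header names below are assumptions based on the column description above; the sample data is fabricated purely to make the sketch self-contained.

```python
import csv
import io

def load_lyrics(fileobj):
    """Yield one dict per crawled song from the CSV output."""
    yield from csv.DictReader(fileobj)

# In practice you would open("lyrics.csv", newline=""); an in-memory
# sample with the assumed header is used here for illustration.
sample = io.StringIO(
    "artist,song,url,lyrics\r\n"
    "ABBA,Waterloo,http://www.metrolyrics.com/waterloo.html,My my...\r\n"
)
rows = list(load_lyrics(sample))
```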

  4. Resuming the crawler:

Make sure your previous lyrics file is still in the same location, then execute

scrapy crawl metrolyrics -s JOBDIR=jobstate

The newly downloaded lyrics are appended to the existing ones.
