Scrapy spider middleware to clean up query parameters in request URLs



This is a Scrapy spider middleware that cleans up the GET query parameters of the request URLs output by the spider, according to patterns provided by the user.


Install scrapy-querycleaner using pip:

$ pip install scrapy-querycleaner


  1. Add QueryCleanerMiddleware to the SPIDER_MIDDLEWARES setting in your settings.py file:

        SPIDER_MIDDLEWARES = {
            'scrapy_querycleaner.QueryCleanerMiddleware': 100,
        }

    Here, priority 100 is just an example. Set its value depending on the other middlewares you may already have enabled.

  2. Enable the middleware by adding either QUERYCLEANER_REMOVE or QUERYCLEANER_KEEP (or both) to your settings.py file.
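Putting both steps together, a minimal settings.py fragment might look like the following (the priority value and the keep pattern are illustrative, not prescribed):

```python
# settings.py (fragment) -- illustrative values

SPIDER_MIDDLEWARES = {
    # Priority 100 is only an example; adjust relative to your other middlewares.
    'scrapy_querycleaner.QueryCleanerMiddleware': 100,
}

# Keep only query parameters named 'pid'; drop everything else.
QUERYCLEANER_KEEP = 'pid'
```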


At least one of the following settings needs to be present for the middleware to be enabled.


You can specify a list of parameter names by using the | (OR) regex operator.

For example, the pattern search|login|postid will match the query parameters search, login and postid. This is by far the most common use case.

By setting the QUERYCLEANER_REMOVE value to .*, you can remove all URL query parameters.
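As a quick illustration of how such a pattern matches parameter names (using Python's re module directly, not the middleware itself):

```python
import re

# The pattern from the example above: matches any of the three names.
pattern = re.compile(r'search|login|postid')

for name in ['search', 'login', 'postid', 'pid']:
    # re.match anchors at the start of the string, so 'pid' does not match.
    print(name, bool(pattern.match(name)))
```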

Supported settings

QUERYCLEANER_REMOVE: a pattern (regular expression) that a query parameter name must match in order to be removed from the URL. (All the others will be kept.)
QUERYCLEANER_KEEP: a pattern that a query parameter name must match in order to be kept in the URL. (All the others will be removed.)

You can combine both if some query parameter patterns should be kept and others removed.

The remove pattern has precedence over the keep one.
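The precedence rule can be sketched as follows. This mirrors the documented behaviour but is not the middleware's actual code: a parameter matched by the remove pattern is dropped even when the keep pattern also matches it.

```python
import re
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def clean_query(url, remove=None, keep=None):
    """Sketch of the documented rules: remove wins over keep."""
    parts = urlsplit(url)
    cleaned = []
    for name, value in parse_qsl(parts.query, keep_blank_values=True):
        if remove and re.match(remove, name):
            continue  # remove pattern has precedence
        if keep and not re.match(keep, name):
            continue  # keep pattern set, but this name does not match it
        cleaned.append((name, value))
    return urlunsplit(parts._replace(query=urlencode(cleaned)))

# 'pid' matches both patterns; the remove pattern wins, so it is dropped.
print(clean_query('http://example.com/item?pid=1&cid=2',
                  remove='pid', keep='pid|cid'))
```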


Let's suppose that the spider extracts URLs whose query strings contain the parameters pid, cid and ttda,

and we want to keep only the parameter pid.

To achieve this objective we can use either QUERYCLEANER_REMOVE or QUERYCLEANER_KEEP:

  • In the first case, the pattern would be cid|ttda:

    QUERYCLEANER_REMOVE = 'cid|ttda'
  • In the second case, pid:

    QUERYCLEANER_KEEP = 'pid'

The best solution depends on the particular case, that is, on how the query filters will affect the other URLs the spider is expected to extract.
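For instance, applying the keep variant to a hypothetical extracted URL (the actual URLs depend on the site being crawled) leaves only pid behind. This helper mimics QUERYCLEANER_KEEP; it is not the middleware's actual implementation:

```python
import re
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def keep_only(url, keep):
    """Keep only query parameters whose name matches the pattern."""
    parts = urlsplit(url)
    kept = [(n, v) for n, v in parse_qsl(parts.query) if re.match(keep, n)]
    return urlunsplit(parts._replace(query=urlencode(kept)))

# Hypothetical URL with the three parameters from the example.
url = 'http://www.example.com/product?pid=135&cid=12&ttda=12'
print(keep_only(url, 'pid'))  # cid and ttda are stripped
```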