A Python script to scrape Google Maps and export business data to CSV. Get names, addresses, phone numbers, ratings, websites, and opening hours for any city - ready for lead generation, market research, or competitive analysis. No browser or Selenium needed.
Disclaimer: This tool is provided for educational and research purposes only. By using this Google Maps Scraper, you agree to comply with local and international laws regarding data scraping and privacy. The authors and contributors are not responsible for any misuse of this software. This tool should not be used to violate the rights of others, or for unethical purposes.
- Google Maps Scraper Features
- What Data Can You Extract?
- What Can I Do with Google Maps Data?
- Quick Start
- Set Up with AI
- Proxy Setup
- Usage
- How the Google Maps Scraper Works
- FAQ
- Grid-based city coverage - divides any city into a grid of small cells to capture every business, not just the first page of results
- CSV export - clean output with 9 data fields per business, ready for spreadsheets, CRMs, or databases
- Residential proxy resilience - 7 retries with exponential backoff, circuit breaker, automatic health checks, and failed cell retry queue
- Checkpoint and resume - interrupted scrapes pick up exactly where they left off
- Cross-cell deduplication - businesses that appear in overlapping cells are only counted once
- Anti-detection - header rotation, session management, and adaptive delays between requests
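The anti-detection idea above can be sketched in a few lines of Python. The user-agent pool and header set here are illustrative placeholders, not the scraper's actual values:

```python
import random
import time

# Hypothetical pool of desktop user agents; the real scraper's pool may differ.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
]

def build_headers() -> dict:
    """Return a fresh header set so consecutive requests don't look identical."""
    return {
        "User-Agent": random.choice(USER_AGENTS),
        "Accept-Language": "en-US,en;q=0.9",
    }

def polite_sleep(delay_min: float = 2.0, delay_max: float = 5.0) -> None:
    """Randomized delay between requests, mirroring --delay-min/--delay-max."""
    time.sleep(random.uniform(delay_min, delay_max))
```

Rotating headers and jittering delays together make request timing and fingerprints less uniform, which is the core of the adaptive-delay approach described above.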
Real data extracted from scraping Google Maps for barbershops in New York City:
| name | address | phone | category | rating | website |
|---|---|---|---|---|---|
| ELITE BARBERS NYC | 782 Lexington Ave, New York, NY 10065 | (212) 308-6660 | Barber shop | 4.9 | elitebarbersnyc.com |
| Pall Mall Barbers Midtown NYC | 10 Rockefeller Plaza, New York, NY 10020 | (212) 586-2220 | Barber shop | 4.8 | pallmallbarbers.nyc |
| Ray's Barber Shop Tribeca | 46 Park Pl, New York, NY 10007 | (646) 828-1052 | Barber shop | 4.9 | rays.brbrshop.com |
Each business is exported with 9 data fields. Here is a complete sample row:
```json
{
  "name": "ELITE BARBERS NYC",
  "address": "782 Lexington Ave, New York, NY 10065",
  "phone": "(212) 308-6660",
  "category": "Barber shop",
  "rating": "4.9",
  "review_count": "",
  "google_maps_url": "https://www.google.com/maps/place/?q=place_id:ChIJkebr65RZwokRI_QVOKhVN-k",
  "website": "https://elitebarbersnyc.com/",
  "opening_hours": "Open - Closes 7 PM"
}
```

This Google Maps scraper is built for anyone who needs local business data at scale:
- Lead generation - build targeted prospect lists for sales outreach. Scrape every business in a category across an entire city, complete with phone numbers, websites, and addresses ready to import into your CRM
- Market research - analyze the competitive landscape for any industry in any city. See how many businesses operate, where they cluster, and how they rate
- Local SEO audits - extract Google Maps data to audit local search presence for your clients or competitors
- Data enrichment - enrich existing business databases with phone numbers, websites, ratings, and opening hours pulled directly from Google Maps
- Sales enablement - gather intel on prospects' locations, ratings, and online presence before outreach calls
- Content and reporting - create data-driven market reports, location analyses, or industry comparisons backed by real Google Maps data
```shell
git clone https://github.com/worldscraping/google-maps-scraper.git
cd google-maps-scraper
pip install -r requirements.txt
```

This scraper requires a residential proxy from MagneticProxy to work reliably. See the Proxy Setup section below for step-by-step instructions.
```shell
python main.py --query "barbershops" --city "New York City, United States" --lang en
```

The scraper creates a CSV file in the current directory (e.g., `results_20260415_180000.csv`). Open it in any spreadsheet app or import it into your CRM.
Already using Claude, ChatGPT, or Codex? Paste this prompt and let the AI do the setup for you. Just replace the placeholder values:
Clone the Google Maps scraper from https://github.com/worldscraping/google-maps-scraper
and set it up. My MagneticProxy credentials are:
- Username: YOURUSERNAME
- Password: YOURPASSWORD
Then scrape all [BUSINESS TYPE] in [CITY, COUNTRY] and export the results to CSV.
That's it. The AI will install dependencies, create the .env file, run the scraper, and show you the results.
Don't have proxy credentials yet? Follow the Proxy Setup steps below to get them in 2 minutes.
This scraper uses MagneticProxy residential proxies to route requests through real residential IPs. This is what prevents blocks and CAPTCHAs when scraping Google Maps at scale.
1. Create an account at magneticproxy.com.

2. Choose a Residential plan. You can start with the smallest plan for just $1 to test the scraper (at the time of writing, a first-purchase coupon saves $4 on any plan). For real scraping, I recommend at least 10 GB of bandwidth. A typical city scrape uses 1-3 GB depending on the city size and number of results, while a large metro like New York or Los Angeles can use 3-5 GB.

3. Get your credentials. After purchasing, go to My Proxies in your dashboard. You will find your proxy username and password there.

4. Create your `.env` file. Copy the example file and paste your credentials:

   ```shell
   cp .env.example .env
   ```

   Then edit `.env`:

   ```
   MAGNETIC_USERNAME=yourusername
   MAGNETIC_PASSWORD=yourpassword
   ```

5. Verify the connection. Run any scrape command. The scraper runs a proxy health check at startup and will tell you immediately if the credentials are wrong or the proxy is unreachable.
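If you want to use the same credentials in your own scripts, a requests-style proxies mapping can be built from them. The gateway host and port below are placeholder assumptions; use the endpoint shown in your MagneticProxy dashboard:

```python
def build_proxies(user: str, password: str,
                  host: str = "gateway.example.com", port: int = 8000) -> dict:
    """Build a requests-style proxies mapping from proxy credentials.

    The host and port defaults are placeholders -- substitute the endpoint
    from your MagneticProxy dashboard.
    """
    proxy_url = f"http://{user}:{password}@{host}:{port}"
    return {"http": proxy_url, "https": proxy_url}
```

The returned dict can be passed as `proxies=` to `requests.get`; in practice the username and password would come from the `.env` file (e.g., via python-dotenv) rather than being hard-coded.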
```shell
python main.py --query QUERY --city CITY [options]
```
| Flag | Default | Description |
|---|---|---|
| `--query` | (required) | Search term (e.g., "barbershops", "restaurants", "dentists") |
| `--city` | (required) | City name with country (e.g., "New York City, United States") |
| `--output` | `results_{timestamp}.csv` | Output CSV file path |
| `--cell-size` | `2.0` | Grid cell size in km. Smaller = more thorough, slower |
| `--max-results` | unlimited | Stop after collecting N results |
| `--resume` | off | Resume from a previous checkpoint |
| `--delay-min` | `2.0` | Minimum delay between requests (seconds) |
| `--delay-max` | `5.0` | Maximum delay between requests (seconds) |
| `--lang` | `es` | Language for results (es, en, pt, etc.) |
| `--proxy-country` | auto-detect | Force proxy exit country (e.g., us, co, mx) |
| `--verbose` | off | Show detailed debug logs |
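An argparse setup mirroring the flag table could look like the sketch below. The names and defaults are taken from the table, but this is an illustration, not the project's actual CLI code:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """CLI sketch based on the flag table; defaults match the table above."""
    p = argparse.ArgumentParser(description="Google Maps scraper")
    p.add_argument("--query", required=True, help="Search term")
    p.add_argument("--city", required=True, help="City name with country")
    p.add_argument("--output", default=None, help="Output CSV file path")
    p.add_argument("--cell-size", type=float, default=2.0,
                   help="Grid cell size in km")
    p.add_argument("--max-results", type=int, default=None,
                   help="Stop after collecting N results")
    p.add_argument("--resume", action="store_true",
                   help="Resume from a previous checkpoint")
    p.add_argument("--delay-min", type=float, default=2.0)
    p.add_argument("--delay-max", type=float, default=5.0)
    p.add_argument("--lang", default="es")
    p.add_argument("--proxy-country", default=None,
                   help="Force proxy exit country; auto-detected when omitted")
    p.add_argument("--verbose", action="store_true")
    return p
```

`build_parser().parse_args()` would then drive the scrape with the parsed options.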
Scrape all barbershops in New York City for lead generation:
```shell
python main.py --query "barbershops" --city "New York City, United States" --lang en
```

Scrape restaurants in Los Angeles with a smaller grid for thorough coverage:

```shell
python main.py --query "restaurants" --city "Los Angeles, United States" --lang en --cell-size 1.0
```

Resume an interrupted scrape:

```shell
python main.py --query "barbershops" --city "New York City, United States" --lang en --resume
```

Google Maps limits search results to roughly 200 businesses per query, no matter how many actually exist in an area. This scraper extracts Google Maps data beyond that limit using a grid-based approach:
- Geocode the city name into a bounding box using OpenStreetMap
- Generate a grid of overlapping cells (default 2 km each) that covers the entire city
- Scrape each cell independently with pagination (up to 10 pages of 20 results per cell)
- Deduplicate results across cells - businesses near cell borders appear in multiple cells but are only kept once
- Export to CSV - results are written progressively, so you get partial data even if the scrape is interrupted
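The grid and deduplication steps above can be sketched like this. The rough km-to-degree conversion, the overlap factor, and the `place_id` key are illustrative assumptions, not the project's exact implementation:

```python
import math

def make_grid(south, west, north, east, cell_km=2.0, overlap=0.1):
    """Yield (lat, lon) cell centers covering a bounding box.

    cell_km mirrors --cell-size; overlap shrinks the step so neighboring
    cells share a border strip and edge businesses aren't missed.
    """
    lat_step = (cell_km / 111.0) * (1 - overlap)  # ~111 km per degree latitude
    mid_lat = (south + north) / 2                 # longitude degrees shrink with latitude
    lon_step = (cell_km / (111.0 * math.cos(math.radians(mid_lat)))) * (1 - overlap)
    lat = south
    while lat < north:
        lon = west
        while lon < east:
            yield (lat + lat_step / 2, lon + lon_step / 2)
            lon += lon_step
        lat += lat_step

def dedupe(results):
    """Cross-cell deduplication: keep the first occurrence of each place id."""
    seen, unique = set(), []
    for r in results:
        if r["place_id"] not in seen:
            seen.add(r["place_id"])
            unique.append(r)
    return unique
```

With a 2 km cell size, a bounding box the size of New York City yields several hundred cells, each scraped independently.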
The scraper is built for large-scale, uninterrupted data extraction:
- Each request retries up to 7 times with exponential backoff (up to ~10 minutes of retry window)
- A circuit breaker automatically detects connectivity issues and runs a health check before continuing
- Failed cells are retried at the end of the main scraping pass instead of being permanently skipped
- The scraper saves a checkpoint after every cell, so you can resume from exactly where you left off with `--resume`
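The retry-with-backoff behavior can be sketched as follows; the exact delays and exception handling here are illustrative, not the project's code:

```python
import random
import time

def fetch_with_retry(fetch, max_attempts=7, base_delay=2.0):
    """Call fetch() with exponential backoff, mirroring the retry policy above.

    The delay before retry n grows as base_delay * 2**n plus jitter, so the
    total retry window reaches minutes with the defaults.
    """
    for attempt in range(max_attempts):
        try:
            return fetch()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # retries exhausted; a failed-cell queue would take over here
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
```

A circuit breaker adds one more layer on top of this: after several consecutive failures it pauses all requests and runs a proxy health check before resuming.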
A typical city scrape uses 1-3 GB depending on the city size and how many businesses match your query. A small town might use under 500 MB, while a large metro like New York City can use 3-5 GB. I recommend starting with a 10 GB plan from MagneticProxy to have enough room for multiple scrapes.
Yes. This is one of the most common use cases. Scrape every business in a specific category (e.g., barbershops, dentists, restaurants) across an entire city and export the results to CSV. You get business names, phone numbers, websites, and addresses - everything you need to build a targeted prospect list and import it into your CRM or outreach tool.
Yes. This scraper uses residential proxies (real IPs from real devices), rotates sessions and browser headers on every request, and adds randomized delays between requests. If a CAPTCHA is detected, the scraper automatically waits, rotates to a new IP, and retries.
Each business in the output CSV includes: name, address, phone number, category, rating, review count, Google Maps URL, website, and opening hours.
Yes. The scraper saves a checkpoint file after every cell. If the process is interrupted (Ctrl+C, network drop, machine restart), run the same command with --resume and it picks up from the last completed cell. No data is lost.
Run the scraper with --query for the business type and --city for the location. For example: python main.py --query "restaurants" --city "Chicago, United States" --lang en. The scraper geocodes the city, generates a grid, and scrapes every matching business within the city limits. Any city in the world that appears on Google Maps is supported.
Any city in the world that appears on Google Maps. The proxy exit country is auto-detected from the city name for better results, but you can override it with --proxy-country. The --lang flag controls the language of the returned data.
Built for scraping Google Maps at scale without getting blocked. If you find this useful, star the repo.