
April 2026 Fix #6

Open
bandrehc wants to merge 4 commits into christivn:main from bandrehc:main

Conversation

@bandrehc
Contributor

@bandrehc bandrehc commented Apr 7, 2026

Google permanently shut down the /localservices/prolist endpoint that this scraper originally used (it now returns HTTP 410 Gone).

The README.md in the repo describes what has been done to fix it.

@christivn
Owner

christivn commented Apr 10, 2026

Hi @bandrehc, thanks for your pull request.
I've reviewed it, but I'm getting this error, it seems to be flagging it as a bot.

>> python3 mapScraperX.py "dentistas en Madrid" --lang es --country es

Mode: Single query
Processing query: 'dentistas en Madrid'
Scraping 'dentistas en Madrid': 0 results [00:00, ?results/s]
[dentistas en Madrid] Could not find pb= search URL in Maps page. Response may be a consent wall or bot-detection page. HTML snippet: '<!doctype html><html lang="es" dir="ltr"><head><base href="https://consent.google.com/"><link rel="preconnect" href="//www.gstatic.com"><meta name="referrer" content="origin"><script nonce="6wHjLVXUib8Ikssrfbwqug">window[\'ppConfig\'] = {productName: \'ConsentUi\', deleteIsEnforced:  true , sealIsEnforc'
[dentistas en Madrid] No results returned.
Found 0 results for 'dentistas en Madrid'
No results found.
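The error message above implies two checks: detecting that the response is a consent page rather than real results, and locating the protobuf-encoded `pb=` search URL in the page. A minimal sketch of both checks, assuming heuristics like the ones the log describes (`is_consent_wall` and `extract_pb_url` are illustrative names, not the actual mapScraperX.py API):

```python
import re
from typing import Optional

def is_consent_wall(html: str) -> bool:
    """Heuristic: Google's consent interstitial sets
    <base href="https://consent.google.com/"> near the top of the page."""
    return "consent.google.com" in html[:2000]

def extract_pb_url(html: str) -> Optional[str]:
    """Look for a search URL carrying the protobuf-encoded 'pb=' parameter."""
    match = re.search(r'"(/search\?[^"]*?pb=[^"]+)"', html)
    return match.group(1) if match else None

# The snippet from the log above trips the consent-wall check and has no pb= URL.
sample = '<!doctype html><html lang="es" dir="ltr"><head><base href="https://consent.google.com/">'
print(is_consent_wall(sample))         # True
print(extract_pb_url(sample) is None)  # True
```

With checks like these, the scraper can report "consent wall or bot-detection page" instead of silently returning zero results, which is exactly the diagnostic shown in the log.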

@bandrehc
Contributor Author

Please pull the code again. I made some tweaks to work around rate limits and got successful results 100% of the time. If the problem persists, try using a VPN to rule out regional consent walls.
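The rate-limit tweaks mentioned above are not shown in this thread, but the usual pattern is retrying with exponential backoff plus jitter so requests do not fire at a detectable cadence. A hedged sketch of that pattern (`retry_fetch` and its parameters are hypothetical, not names from the actual patch):

```python
import random
import time
from typing import Callable, Optional, TypeVar

T = TypeVar("T")

def retry_fetch(fetch: Callable[[], Optional[T]],
                max_tries: int = 4,
                base_delay: float = 1.0) -> Optional[T]:
    """Call fetch() until it returns a truthy result or tries run out,
    sleeping base_delay, 2*base_delay, 4*base_delay... plus random jitter
    between attempts to avoid a machine-regular request rhythm."""
    for attempt in range(max_tries):
        result = fetch()
        if result:
            return result
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.25))
    return None

# Usage: a fetch that succeeds on the third simulated attempt.
attempts = iter([None, None, "ok"])
print(retry_fetch(lambda: next(attempts), base_delay=0.01))  # ok
```

Backoff alone does not defeat a consent wall, which is why the VPN suggestion above still applies when the block is regional rather than rate-based.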

@christivn
Owner

@bandrehc How could we merge it with pull request #7 and get the best of both worlds?
@devsanthoshmk's code runs correctly without a VPN or proxy. Could we improve it with your code?

@devsanthoshmk
Contributor

@christivn, @bandrehc and I use completely different approaches, and @bandrehc's is the more optimal one: I get more data from this #6, and it is working for me. I think my scraping logic is safer, though, so @bandrehc, can I add my logic as a fallback to yours?

@christivn
Owner

I’m with you on that. The only drawback, in my opinion, is the risk of blocks, which would require proxies or VPNs, since it's easier for Google to detect this approach as a bot.

@devsanthoshmk I think it's a good idea to use your suggestion as a backup in case @bandrehc's solution runs into any obstacles or issues.
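The arrangement agreed on here, a primary scraper with a safer backup, can be sketched as a simple fallback chain. Both scraper callables below are stand-ins for the two PRs' implementations, not the real code:

```python
from typing import Callable, List

def scrape_with_fallback(query: str,
                         primary: Callable[[str], List[dict]],
                         fallback: Callable[[str], List[dict]]) -> List[dict]:
    """Return the primary scraper's results when it finds any;
    otherwise fall back to the secondary scraper."""
    results = primary(query)
    if results:
        return results
    print(f"[{query}] primary scraper returned nothing; trying fallback")
    return fallback(query)

# Usage with stand-in scrapers: primary blocked, fallback succeeds.
blocked = lambda q: []                                # simulates a consent-wall block
backup = lambda q: [{"name": "Clínica Dental Sol"}]   # simulated fallback hit
print(scrape_with_fallback("dentistas en Madrid", blocked, backup))
```

Keeping the two approaches behind one entry point means users get PR #6's richer results by default and only pay the fallback's cost when the primary path is blocked.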

