Hello! I want to crawl the accounts that a Weibo post @-mentions, e.g. @叶婉婷cici, and fetch their basic profile information. Inspecting the element in Chrome shows:

<a href="/n/%E5%8F%B6%E5%A9%89%E5%A9%B7cici">@叶婉婷cici</a>

On top of the weibo_spider.py code in your search branch I added:

yield Request(url=self.base_url + href, callback=self.parse_atwho)

but when I run the spider it keeps printing:

[scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://weibo.cn/n/%E5%90%8D%E4%BA%BA%E5%9D%8A%E9%97%B4%E5%85%AB%E5%8D%A6> (failed 3 times): TCP connection timed out: 10060: The connection attempt failed because the connected party did not properly respond after a period of time, or the connected host has failed to respond.

How can I fix this?
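Not from the thread, but a common first round of mitigations for repeated 10060 timeouts against weibo.cn is throttling requests and sending browser-like headers plus a logged-in cookie via Scrapy's settings.py. All values below are guesses and placeholders, not a confirmed fix:

```python
# settings.py -- hedged guesses, not a confirmed fix for this issue
DOWNLOAD_DELAY = 2       # throttle so weibo.cn is less likely to drop connections
RETRY_TIMES = 5          # retry more than the default 2 times
DOWNLOAD_TIMEOUT = 30    # fail faster than the 180 s default so retries cycle sooner
DEFAULT_REQUEST_HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Cookie": "...",     # placeholder: copy a cookie from a logged-in weibo.cn session
}
```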
That address opens fine in a browser; by rights this shouldn't happen.
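For what it's worth, the percent-encoded path is just the UTF-8 screen name, which the standard library can round-trip. This is handy for confirming which profile a timed-out URL was pointing at:

```python
from urllib.parse import quote, unquote

href = "/n/%E5%8F%B6%E5%A9%89%E5%A9%B7cici"  # href from the question's anchor tag

# Decode the percent-encoded bytes back to the readable screen name
decoded = unquote(href)
print(decoded)  # -> /n/叶婉婷cici

# Encoding the readable path gives back the exact href
assert quote(decoded) == href
```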
Yeah, Scrapy wouldn't work, so in the end I just solved it crudely with requests.get.
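A minimal sketch of that requests.get fallback. The helper names, headers, and cookie handling below are my assumptions, not code from the thread; weibo.cn generally requires a logged-in cookie:

```python
import requests

BASE_URL = "https://weibo.cn"

def profile_url(base_url, href):
    # Join the spider's base_url with the /n/... href from the anchor tag
    return base_url.rstrip("/") + href

def fetch_profile(href, cookie, base_url=BASE_URL):
    # Hypothetical helper: `cookie` should be copied from a logged-in
    # weibo.cn browser session, or the request is likely to be rejected.
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
        "Cookie": cookie,
    }
    resp = requests.get(profile_url(base_url, href), headers=headers, timeout=30)
    resp.raise_for_status()  # surface HTTP errors instead of parsing an error page
    return resp.text         # raw HTML to parse for the profile's basic info
```

Unlike Scrapy's downloader pipeline, this issues one plain blocking request per profile, which is presumably what "crude but effective" means here.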