Issue at every first run #26
Are you sure you are running the latest code? requests_oauthlib is included in both the requirements.txt and the setup.py.
Sorry to say, @edsu, but now on OS X I get the same results as on the old Debian box:
I executed the same query as yesterday; results:
Here are the results of utils/summarize.py with --scrape:
and without --scrape:
I don't understand this ticket. I thought you opened it because you were getting an error about the missing requests module?
I opened it because the requirements had apparently all been installed properly from the beginning, and I tried to reinstall twarc anyway, getting what I posted with the issue.
I did an --upgrade anyway; the other requirements were OK.
The main issue is that it re-downloads all the same tweets from the beginning instead of resuming from the last saved tweet ids.
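For illustration only, here is a minimal sketch of what "resuming from the last saved ids" could look like: scan the tweet files already on disk for the newest tweet id and use it as a since_id for the next search. The *.json glob pattern and the idea of passing the value via a --since_id option are assumptions, not a description of twarc's actual behaviour.

# Sketch only: find the newest tweet id in previously saved line-oriented JSON
# files, so a later search can ask only for tweets newer than what is on disk.
import glob
import json

def newest_saved_id(pattern="*.json"):
    """Return the largest tweet id found in the saved JSON files, or None."""
    newest = None
    for path in glob.glob(pattern):
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line:
                    continue
                try:
                    tweet = json.loads(line)
                except ValueError:
                    # skip lines that are not valid JSON
                    continue
                tweet_id = tweet.get("id")
                if tweet_id is not None and (newest is None or tweet_id > newest):
                    newest = tweet_id
    return newest

if __name__ == "__main__":
    since_id = newest_saved_id()
    if since_id:
        print("resume the search with --since_id", since_id)
    else:
        print("no saved tweets found; a full search would start from scratch")

With something like this, relaunching the same query would only request tweets newer than the ones already saved, instead of downloading everything again.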
I'm afraid I still don't understand your problem. Would removing the --scrape functionality help you?
It doesn't help; it doesn't matter whether --scrape is used or not. As soon as I solved this issue, I was back to a previously opened issue, the one that made me stop using the previous Debian box.
So, I solved the initial issue about "requests", but a new issue appeared:
I had this for a while on some Debian boxes, and now the same on my clean OS X box. The first run takes time but then stops, as if the requests package weren't installed; in this case it stopped while 119 API attempts still remained. All subsequent runs work fine, but the failed first run limits the tweet results.
It happens with and without --scrape. The example involves a lot of tweets; when the results are small, it seems to work fine.
$ twarc.py --scrape "#moncler #report"
The example is retrieving tweets, so you can relaunch it and it will work. But drop all the *.json files and start again, and it fails in the same way.
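As an aside on the "119 API attempts" still remaining: that figure sounds like the remaining allowance in Twitter's per-window rate limit for the search endpoint. A minimal sketch, assuming placeholder OAuth credentials and the v1.1 search endpoint, of how the remaining allowance can be read from the response headers with requests_oauthlib (the dependency discussed earlier); this is illustrative and not twarc's own code:

# Illustrative only: read Twitter's remaining search quota from response headers.
# The credentials below are placeholders; this is not twarc's code.
from requests_oauthlib import OAuth1Session

session = OAuth1Session(
    client_key="CONSUMER_KEY",
    client_secret="CONSUMER_SECRET",
    resource_owner_key="ACCESS_TOKEN",
    resource_owner_secret="ACCESS_TOKEN_SECRET",
)

resp = session.get(
    "https://api.twitter.com/1.1/search/tweets.json",
    params={"q": "#moncler #report", "count": 100},
)

# x-rate-limit-remaining says how many calls are left in the current window;
# a well-behaved client waits until x-rate-limit-reset once it reaches zero.
print("remaining:", resp.headers.get("x-rate-limit-remaining"))
print("reset at :", resp.headers.get("x-rate-limit-reset"))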