How to get projects information? #48
Hi @mcalvera, that information would come from the iNaturalist API's project endpoints. Unfortunately, those endpoints aren't yet implemented in pyinaturalist. You can see a list of currently implemented endpoints here, by the way. I can certainly add this to the list of endpoints to work on next, if that's something you would use. Can you look over the list in the API docs and let me know which endpoint(s) you would like added to pyinaturalist?
@mcalvera I went ahead and added two of the requested endpoints. If you'd like to test them out now, you can install the development build. Let me know if that does what you want, or if there are additional endpoints or features you'd like to see.
@mcalvera FYI, these changes are now available in the latest stable release. I am going to close this issue now; feel free to open another one if there are additional endpoints you would like added.
Thank you so much!
Dear @JWCook, can I get more than 10 projects with the get_projects() function? It seems that it only extracts 10 projects. Thank you for your answers!
@mcalvera Sure, that's controlled with the pagination parameters:

```python
>>> response = get_projects(q='invasive species')
>>> print(f"Page: {response['page']} | per page: {response['per_page']} | total results: {response['total_results']}")
'Page: 1 | per page: 10 | total results: 153'
```

So that means you're just seeing the first page of results: the default page size is 10, and with 153 total results there would be 16 pages in total. You can get the next page with the `page` parameter:

```python
>>> response = get_projects(q='invasive species', page=2)
>>> print(f"Page: {response['page']} | per page: {response['per_page']} | total results: {response['total_results']}")
'Page: 2 | per page: 10 | total results: 153'
```

This is basically the same thing as searching for projects on the iNaturalist website, where you see a 'Next' button and a list of page numbers at the bottom of the page.

You can also increase the page size, which is probably what you want. The maximum page size varies between endpoints, but for most of them it's 200 results. So that means you can do this:

```python
>>> response = get_projects(q='invasive species', per_page=200)
>>> print(f"Page: {response['page']} | per page: {response['per_page']} | total results: {response['total_results']}")
'Page: 1 | per page: 153 | total results: 153'
```

Let me know if that answers your question. P.S.,
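As an aside, you can work out how many pages a search will need from `total_results` and `per_page`. This is just arithmetic, not part of pyinaturalist; a minimal sketch:

```python
from math import ceil

def total_pages(total_results: int, per_page: int) -> int:
    """Number of pages needed to fetch all results at a given page size."""
    return ceil(total_results / per_page)

print(total_pages(153, 10))   # 16 pages at the default page size
print(total_pages(153, 200))  # 1 page at the maximum page size
```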
Another thing to note is that for other endpoints that tend to return large numbers of results, we have functions that do the pagination for you and fetch all the results at once. Those are the functions starting with `get_all_`. If you need to do a lot of searches with more than 300 results, we could add a similar function for projects as well.
@JWCook Thank you so much! It works for me. I think I can use a for loop to extract all the pages with the page and per_page parameters. I'm trying to extract the European projects, and I am doing it with lat, lng, and radius; that is why I need to extract more than 10 or 300 projects. The function you developed is really useful!
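The for loop mentioned above could look something like this sketch. Here `fetch_page` is a hypothetical stand-in for a call like `get_projects(q=..., page=n, per_page=n)`, so the paging logic can be shown without hitting the API; the fake in-memory "API" below is only for illustration:

```python
from math import ceil

def iter_all_results(fetch_page, per_page=200):
    """Yield results from every page, given a function fetch_page(page=..., per_page=...)
    that returns a response dict with 'results' and 'total_results' keys."""
    first = fetch_page(page=1, per_page=per_page)
    yield from first['results']
    pages = ceil(first['total_results'] / per_page)
    for page in range(2, pages + 1):
        yield from fetch_page(page=page, per_page=per_page)['results']

# Example with a fake in-memory "API" of 5 records and a page size of 2
records = [{'id': i} for i in range(5)]

def fake_fetch(page, per_page):
    start = (page - 1) * per_page
    return {'results': records[start:start + per_page], 'total_results': len(records)}

print(list(iter_all_results(fake_fetch, per_page=2)))
```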
@JWCook Is it possible to get the project URL? For example, for the project "Vascular plants of Ham Lands Local Nature Reserve", getting the URL "https://www.inaturalist.org/projects/vascular-plants-of-ham-lands-local-nature-reserve". Thank you in advance!
Yes, it looks like the API response doesn't include the full URL, but you can build it using either the project `id` or `slug`:

```python
>>> response = get_projects(q='vascular plants ham lands')
>>> project = response['results'][0]

>>> # Short URL
>>> f"https://www.inaturalist.org/projects/{project['id']}"
'https://www.inaturalist.org/projects/19506'

>>> # Long URL
>>> f"https://www.inaturalist.org/projects/{project['slug']}"
'https://www.inaturalist.org/projects/vascular-plants-of-ham-lands-local-nature-reserve'
```

Either URL will work. If you use the short URL (using the project ID), you'll be redirected to the long URL.
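For convenience, the two URL styles above can be wrapped in a small helper. This is my own sketch, not part of pyinaturalist; it works on any project dict that has `id` and `slug` keys, like the records returned by `get_projects()`:

```python
BASE_URL = 'https://www.inaturalist.org/projects'

def project_url(project: dict, short: bool = False) -> str:
    """Build a project page URL from a project record (a dict with 'id' and 'slug')."""
    key = 'id' if short else 'slug'
    return f"{BASE_URL}/{project[key]}"

project = {'id': 19506, 'slug': 'vascular-plants-of-ham-lands-local-nature-reserve'}
print(project_url(project, short=True))
# https://www.inaturalist.org/projects/19506
print(project_url(project))
# https://www.inaturalist.org/projects/vascular-plants-of-ham-lands-local-nature-reserve
```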
It works! Thank you so much.
@JWCook I was looking for a example of that, but the link is broken. EDIT: upgraded to 0.18.0 and still cannot import that one |
@abubelinha That info is outdated; sorry for the confusion. You can use the regular get_observations() for this. All paginated endpoints now support the "get all" behavior (more info here). Example: get_observations(user_id='my_username', page='all') |
Ah OK, I see. I was asking about something else, but I was not sure if this was the appropriate thread. Thanks a lot indeed!
Hello,
Thank you for the work done; it is really useful. I would like to know how I can get all the projects' information. Is it possible?
Thank you in advance,
Miriam