This repository has been archived by the owner on Aug 2, 2022. It is now read-only.
The directions and surface APIs both require `;`-delimited `lon,lat` coordinate pairs. The directions API takes a JSON payload with a `coordinates` array. Future APIs may require different encodings for geometry inputs.
But Python users are going to be dealing with different data structures: GeoJSON-like objects, Shapely geometries, Fiona datasources, GeoDataFrames, etc. All of these implement `__geo_interface__`, so that's a big win.
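As a rough illustration of what such an encoding function might look like, here is a minimal sketch. The name `encode_waypoints` and its `precision` default are hypothetical, not part of any existing SDK API; it assumes the input is either a mapping or an object exposing `__geo_interface__`:

```python
def encode_waypoints(obj, precision=6):
    # Accept anything exposing __geo_interface__ (Shapely geometry,
    # Fiona feature, etc.) or a plain GeoJSON-like mapping.
    geom = getattr(obj, "__geo_interface__", obj)
    # Unwrap a Feature to its geometry.
    if geom.get("type") == "Feature":
        geom = geom["geometry"]
    coords = geom["coordinates"]
    # Treat a single Point as a one-element coordinate sequence.
    if geom["type"] == "Point":
        coords = [coords]
    # Join rounded lon,lat pairs with ';' as the directions API expects.
    return ";".join(
        "{},{}".format(round(lon, precision), round(lat, precision))
        for lon, lat in coords
    )
```

For example, a GeoJSON-like LineString would encode as `lon,lat;lon,lat`.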
So Question 0: should we provide encoding functions or just leave it up to the user? If we assume the former, let's spec it out and define the behavior a bit more precisely.
Question 1: Should the encoding happen external to the API wrapper or internally?
Question 2: What can `features` consist of?
I wrote a vector data abstraction function for rasterstats that solves a similar problem: pass it an object and it will try its best to get an iterable of feature-like dicts out of it. Maybe a good model, maybe not, depending on how liberal/strict we'd want to be with inputs.
Question 3: Validation.
Should the encoding enforce known limitations of the API, such as min/max number of points, geometry types (e.g. points only), precision requirements, or others? Will geometries be "cast", i.e. a MultiPoint encoded as a series of single points?
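One possible shape for such a validation/casting pass, sketched under assumptions: the function name `cast_to_points`, the allowed geometry types, and the `max_points` limit are all illustrative, not documented API constraints.

```python
def cast_to_points(geom, max_points=25):
    # max_points is a hypothetical limit for illustration only.
    allowed = ("Point", "MultiPoint", "LineString")
    if geom["type"] not in allowed:
        raise ValueError("unsupported geometry type: %s" % geom["type"])
    coords = geom["coordinates"]
    # Cast: a single Point becomes a one-element list; MultiPoint and
    # LineString flatten to a plain list of (lon, lat) tuples.
    if geom["type"] == "Point":
        points = [tuple(coords)]
    else:
        points = [tuple(c) for c in coords]
    if len(points) > max_points:
        raise ValueError(
            "too many points: %d (max %d)" % (len(points), max_points)
        )
    return points
```

A strict variant could instead reject MultiPoint outright rather than casting it; that is exactly the liberal-vs-strict trade-off raised above.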
Just an idea at this point. But if we do it (and do it right) it could make interop with other Python geo tools ridiculously smooth.
0: provide
1: internally
2: let's standardize on an iterator over objects that provide `__geo_interface__` or are GeoJSON-like mappings
3: letting users turn a GeoJSON LineString into a string of waypoints would be a nice feature, so let's think about casting. In the geocoding part of the SDK we already restrict the precision of coordinates and should do the same for directions and distance if necessary.
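The standardized iterator proposed in answer 2 could be sketched like this (the name `iter_geometries` is hypothetical); it normalizes a mixed sequence of inputs into plain GeoJSON-like geometry mappings:

```python
def iter_geometries(objs):
    # Accept objects providing __geo_interface__ or plain
    # GeoJSON-like mappings; unwrap Features to their geometries.
    for obj in objs:
        gi = getattr(obj, "__geo_interface__", obj)
        if gi.get("type") == "Feature":
            gi = gi["geometry"]
        yield gi
```

Downstream encoding and validation steps would then only ever see geometry mappings, regardless of whether the caller passed Shapely geometries, Fiona features, or raw dicts.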