Ciudad Mexico #55
Comments
Sure:

```r
# from: https://www.ecobici.cdmx.gob.mx/es/informacion-del-servicio/open-data
u = "https://www.ecobici.cdmx.gob.mx/sites/default/files/data/usages/2018-01.csv"
u_static = "https://www.ecobici.cdmx.gob.mx/sites/default/files/data/usages/"
dir.create("input-data", showWarnings = FALSE) # ensure the destination folder exists
download.file(url = u, destfile = "input-data/month-data.csv")
# create loop to download every month
yrs = 2011:2018
months = formatC(1:12, width = 2, flag = "0")
for(y in yrs) {
  for(m in months) {
    um = paste0(u_static, y, "-", m, ".csv")
    print(um)
    destfile = paste0("input-data/", y, "-", m, ".csv")
    # try() so a missing month (e.g. future dates in 2018) does not stop the loop
    try(download.file(url = um, destfile = destfile))
  }
}
```
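A follow-up sketch, assuming the monthly files landed in input-data/ as above: stack them into one table with purrr (a shared column schema across months is an assumption until the files are inspected).

```r
library(tidyverse)
# list the monthly usage files downloaded by the loop above
files = list.files("input-data", pattern = "^20[0-9]{2}-[0-9]{2}\\.csv$", full.names = TRUE)
# read and row-bind them into one data frame
data_all = purrr::map_dfr(files, readr::read_csv)
nrow(data_all)
```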
This shows a first attempt at linking them with OSM data. The results are unreliable because the docking stations do not have correct names.

```r
library(tidyverse)
library(osmdata)
library(stplanr)
library(sf)
data_raw = read_csv("input-data/month-data.csv")
# get cycle hire points
q = opq("mexico city") %>%
  add_osm_feature(key = "amenity", value = "bicycle_rental")
osm_data = osmdata_sf(q = q)
stations_osm = osm_data$osm_points
class(stations_osm)
#> [1] "sf"         "data.frame"
# task: load high quality data in sf class here
stations = stations_osm # replace with official data
# test the extraction on a dummy name ([0-9], not [1-9], so zeros are matched)
str_extract("Ecobici 168 balkdjf", pattern = "[0-9]{3}")
station_ids = str_extract(stations$name, pattern = "[0-9]{1,3}")
# station_ids[station_ids == "1" & !is.na(station_ids)] = NA
stations$ids = station_ids
stations = select(stations, ids, amenity)
plot(stations)
mapview::mapview(stations)
# aggregate trips into origin-destination flows
data_ag = data_raw %>%
  group_by(Ciclo_Estacion_Retiro, Ciclo_Estacion_Arribo) %>%
  summarise(flow = n())
sum(data_ag$flow)
data_top = data_ag %>%
  top_n(n = 200, wt = flow)
# keep only flows whose origin and destination both matched an OSM station
sel = data_ag$Ciclo_Estacion_Retiro %in% stations$ids &
  data_ag$Ciclo_Estacion_Arribo %in% stations$ids
summary(sel)
data_top = data_ag[sel, ]
summary(data_top$Ciclo_Estacion_Retiro %in% stations$ids)
# convert the OD flows to desire lines between station points
lines = od2line(flow = data_top, stations)
```
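Since the station names are the weak link, here is a quick diagnostic sketch (assuming the `stations` and `data_ag` objects created above) to quantify how much of the trip data the OSM-derived ids actually cover:

```r
# how many OSM points yielded a usable station id at all?
stations$ids = as.integer(stations$ids)
summary(is.na(stations$ids))
# how many distinct trip-data station numbers are covered by those ids?
ids_trip = union(data_ag$Ciclo_Estacion_Retiro, data_ag$Ciclo_Estacion_Arribo)
summary(ids_trip %in% stations$ids)
```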
Thanks to @luis-tona for getting me onto this; he's on it! Any updates, Luis? This may lead to the bikeshare data being made available online for everyone (which could be an amazing side benefit of the project).
Great, I'll check out the data source asap. A huge part of `bikedata` is just data cleaning and stuff like station name matching, so no biggie there. I'll just have to ensure an ongoing commitment to data availability.
Data sources can be extracted as "https://www.ecobici.cdmx.gob.mx/sites/default/files/data/usages/YYYY-MM.csv", starting at 2010-02.
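A sketch expanding that pattern into the full list of monthly URLs; the start date comes from the comment above, while the end date is an assumption to adjust as new months appear.

```r
# build every monthly URL from 2010-02 onwards (end date assumed)
dates = seq(as.Date("2010-02-01"), as.Date("2018-01-01"), by = "month")
urls = paste0(
  "https://www.ecobici.cdmx.gob.mx/sites/default/files/data/usages/",
  format(dates, "%Y-%m"), ".csv"
)
head(urls)
```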
The trip files have only station numbers. These are all given on the official station map, but the raw data are all buried in a Drupal scheme, so can't be accessed. There's supposed to be an API, but I applied for a key and just received an empty email. I've written to ask them about providing data and/or help with the API, and will have to wait for a response.
Tricky. I know Luis is struggling to make sense of the data also, which must be frustrating as it seems to all be there, just not in a joined-up way. Let us know how you get on + cheers for the updates.
Code to get the data files: see the download loop posted above.
Great progress - is that all of them though? Heads-up @luis-tona - this may be of use for your project.
nah, that's just me dumping the generic code. Any …
Note that GBFS data are available for Guadalajara, but not Ciudad Mexico. Still have to wait on this one.
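For reference, a minimal sketch of reading a GBFS feed in R; the auto-discovery URL below is a placeholder assumption, not a confirmed endpoint for Guadalajara's system.

```r
library(jsonlite)
# placeholder GBFS auto-discovery URL (assumed, not a confirmed endpoint)
gbfs_url = "https://example.com/gbfs/gbfs.json"
feeds = fromJSON(gbfs_url)
# gbfs.json lists the per-language feed URLs, e.g. station_information
str(feeds$data, max.level = 2)
```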
We've made great progress cleaning the datasets - 100 million+ journeys (rows of data) logged.
@Robinlovelace Can you please provide details of the Mexico City data you mentioned? It'd be great to incorporate that if possible.